
NVIDIA RTX 40-series "Ada" GPUs to Stick to PCI-Express Gen 4

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,301 (7.52/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
NVIDIA's next-generation GeForce "Ada" graphics architecture may stick to PCI-Express 4.0 as its system bus interface, according to kopite7kimi, a reliable source for NVIDIA leaks. This is unlike Ada's sister architecture for compute, "Hopper," which leverages PCI-Express 5.0 in its AIC form-factor cards for its shared memory pools and other resource-sharing features similar to CXL. This would make Ada the second graphics architecture from NVIDIA to use PCIe Gen 4, after the current-gen "Ampere"; the previous-gen "Turing" used PCIe Gen 3. PCI-Express 4.0 x16 offers 32 GB/s of per-direction bandwidth, and NVIDIA has implemented the Resizable BAR feature since "Ampere," which lets the system see the entire dedicated video memory as one addressable block, rather than through tiny 256 MB apertures.
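The bandwidth figure quoted above falls out of the per-lane signalling rates published for each PCIe generation. A minimal Python sketch (the rates and line encodings are from the public PCIe specifications; the function name is just for illustration):

```python
# Rough per-direction bandwidth math for a PCIe x16 link, using the
# per-lane transfer rates and line-code overheads of each generation
# (Gen 1/2 use 8b/10b encoding; Gen 3 onward use 128b/130b).
GENS = {
    # generation: (GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def x16_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Per-direction bandwidth in GB/s for a PCIe link of `lanes` lanes."""
    rate, eff = GENS[gen]
    return rate * eff * lanes / 8  # GT/s -> GB/s after encoding overhead

for gen in GENS:
    print(f"PCIe {gen}.0 x16: {x16_bandwidth_gbps(gen):.1f} GB/s per direction")
# Gen 4 x16 works out to ~31.5 GB/s, commonly rounded to 32 GB/s.
```

Gen 1/2 lose 20% of raw bitrate to 8b/10b encoding, while Gen 3 onward lose under 2% to 128b/130b, which is why Gen 4 x16 comes out at roughly 31.5 GB/s rather than a flat 32.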

Despite using PCI-Express 4.0 for its host interface, GeForce "Ada" graphics cards are expected to make extensive use of the ATX 3.0-spec 16-pin power connector that the company debuted with the RTX 3090 Ti, particularly on higher-end GPUs with typical board power above 225 W. The 16-pin connector is being marketed as a "PCIe Gen 5"-generation standard, particularly by PSU manufacturers cashing in on early-adopter demand. All eyes are now on AMD's RDNA3 graphics architecture, and whether it will be first to market with PCI-Express Gen 5, the way RDNA (RX 5000 series) was with PCIe Gen 4. The decision to stick with PCIe Gen 4 is particularly interesting given that Microsoft DirectStorage may gain adoption in the coming years, something that is expected to strain the GPU's system bus as SSD I/O transfer rates increase with M.2 PCIe Gen 5 SSDs.



View at TechPowerUp Main Site | Source
 
Joined
Dec 4, 2021
Messages
53 (0.05/day)
Processor Ryzen R7 5700X
Motherboard Gigabyte X570 I Pro WiFi
Cooling Noctua NH-L12
Memory 32 GB LPX DDR4-3200-RAM
Video Card(s) Nvidia Geforce RTX 4080 FE 16 GB
Storage 2 TB Gigabyte Aorus NVMe SSD
Display(s) EIZO ColorEdge CG2700X
Case FormD T1 V2 SW
Audio Device(s) Naim Mu-so QB2
Power Supply Corsair SF750
Mouse Logitech MX Vertical
Keyboard Mode 65, Mode Sonnet, ZSA Moonlander
Software Windows 10
PCIe Gen 4 should be enough in nearly all use cases at this point. Only a few users need even more speed for file/data transfers on an everyday basis. Maybe with the RTX 50-series we'll see PCIe Gen 5 in like 2023/24.
 
Joined
Feb 15, 2019
Messages
1,666 (0.78/day)
System Name Personal Gaming Rig
Processor Ryzen 7800X3D
Motherboard MSI X670E Carbon
Cooling MO-RA 3 420
Memory 32GB 6000MHz
Video Card(s) RTX 4090 ICHILL FROSTBITE ULTRA
Storage 4x 2TB Nvme
Display(s) Samsung G8 OLED
Case Silverstone FT04
Day 1: PCI-E 5 devices = 0
Day 180: PCI-E 5 devices still = 0

Poor Intel
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
PCIe Gen 4 should be enough in nearly all use cases at this point. Only a few users need even more speed for file/data transfers on an everyday basis. Maybe with the RTX 50-series we'll see PCIe Gen 5 in like 2023/24.
I doubt that; it would also increase the cost of GPUs. AMD/NVIDIA will avoid it for as long as they can. It's really not needed now that SLI and CrossFire are things of the past.

The earliest I predict it becoming mainstream is 2025. Oddly enough, Intel may push it: as the new entrant, this could be a differentiating factor they'd obviously want to promote!
 
Joined
May 8, 2016
Messages
1,922 (0.61/day)
System Name BOX
Processor Core i7 6950X @ 4,26GHz (1,28V)
Motherboard X99 SOC Champion (BIOS F23c + bifurcation mod)
Cooling Thermalright Venomous-X + 2x Delta 38mm PWM (Push-Pull)
Memory Patriot Viper Steel 4000MHz CL16 4x8GB (@3240MHz CL12.12.12.24 CR2T @ 1,48V)
Video Card(s) Titan V (~1650MHz @ 0.77V, HBM2 1GHz, Forced P2 state [OFF])
Storage WD SN850X 2TB + Samsung EVO 2TB (SATA) + Seagate Exos X20 20TB (4Kn mode)
Display(s) LG 27GP950-B
Case Fractal Design Meshify 2 XL
Audio Device(s) Motu M4 (audio interface) + ATH-A900Z + Behringer C-1
Power Supply Seasonic X-760 (760W)
Mouse Logitech RX-250
Keyboard HP KB-9970
Software Windows 10 Pro x64
They may try to push it in the low-end segment as an excuse to chop PCIe lanes down to "x1 5.0/6.0".
 
Joined
Jan 14, 2019
Messages
12,586 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Good. Hopefully it will bring costs and prices down a notch. Not that we need gen 5 so soon anyway.
 
Joined
Aug 26, 2021
Messages
385 (0.32/day)
I really hope we don't see any own goals next gen like the PCIe 4.0 x4 cards AMD released, which scuppered PCIe 3.0 owners.
 
Joined
Apr 8, 2008
Messages
342 (0.06/day)
System Name Xajel Main
Processor AMD Ryzen 7 5800X
Motherboard ASRock X570M Steel Legened
Cooling Corsair H100i PRO
Memory G.Skill DDR4 3600 32GB (2x16GB)
Video Card(s) ZOTAC GAMING GeForce RTX 3080 Ti AMP Holo
Storage (OS) Gigabyte AORUS NVMe Gen4 1TB + (Personal) WD Black SN850X 2TB + (Store) WD 8TB HDD
Display(s) LG 38WN95C Ultrawide 3840x1600 144Hz
Case Cooler Master CM690 III
Audio Device(s) Built-in Audio + Yamaha SR-C20 Soundbar
Power Supply Thermaltake 750W
Mouse Logitech MK710 Combo
Keyboard Logitech MK710 Combo (M705)
Software Windows 11 Pro
While I wish it had PCIe 5.0 just for the sake of it, I know it won't matter even when MS DirectStorage becomes a thing.

I mean, PCIe 4.0 x16 has the same bandwidth as PCIe 5.0 x8, meaning you'd need two PCIe 5.0 NVMe drives running at full speed in RAID 0 to get anywhere close to a bottleneck.
But again, I wish it had PCIe 5.0 just for the sake of it, and because other components will have it within its lifetime.
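A quick back-of-envelope check of the equivalence in the post above, in Python (the encoding-overhead math mirrors the public PCIe specifications; the variable names are illustrative):

```python
# A PCIe 4.0 x16 GPU link carries as much as PCIe 5.0 x8, so roughly
# two Gen 5 x4 SSDs streaming flat-out in RAID 0 are needed to match
# the GPU link's bandwidth.
PER_LANE_GBPS = {4: 16.0 * 128 / 130 / 8, 5: 32.0 * 128 / 130 / 8}

gpu_link = PER_LANE_GBPS[4] * 16   # PCIe 4.0 x16 GPU link
gen5_x8 = PER_LANE_GBPS[5] * 8     # PCIe 5.0 x8 (same bandwidth)
gen5_ssd = PER_LANE_GBPS[5] * 4    # one Gen 5 x4 NVMe drive

print(f"PCIe 4.0 x16 : {gpu_link:.1f} GB/s")
print(f"PCIe 5.0 x8  : {gen5_x8:.1f} GB/s")   # identical to the line above
print(f"Gen 5 x4 SSD : {gen5_ssd:.1f} GB/s")  # half the GPU link
```

Since Gen 5 exactly doubles the Gen 4 per-lane rate with the same 128b/130b encoding, one Gen 5 x4 drive tops out at precisely half of a Gen 4 x16 link.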
 
Joined
Dec 25, 2020
Messages
7,022 (4.81/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Audio Device(s) Apple USB-C + Sony MDR-V7 headphones
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard IBM Model M type 1391405 (distribución española)
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Day 1: PCI-E 5 devices = 0
Day 180: PCI-E 5 devices still = 0

Poor Intel

I don't see how having forward support for an interconnect as important as PCI Express is a bad thing.
 
Joined
Dec 28, 2012
Messages
3,956 (0.90/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
PCIe Gen 4 should be enough in nearly all use cases at this point. Only a few users need even more speed for file/data transfers on an everyday basis. Maybe with the RTX 50-series we'll see PCIe Gen 5 in like 2023/24.
PCIe Gen 3 is plenty. Technically even PCIe Gen 2 is plenty; the penalty for a single GPU at 2.0 really only applies to 3090-tier GPUs.
 
Joined
Jan 14, 2019
Messages
12,586 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
PCIe Gen 3 is plenty. Technically even PCIe Gen 2 is plenty; the penalty for a single GPU at 2.0 really only applies to 3090-tier GPUs.
... on 16 lanes, I might add. Take 12 of those 16 away, like AMD did with the 6400 / 6500 XT, and you have a bit of a hit-and-miss situation.
 
Joined
Oct 8, 2006
Messages
173 (0.03/day)
That's fine as long as motherboards get PCIe 5.0. GPUs don't need to pay a premium for a standard whose bandwidth they don't come close to using. The real driver for PCIe 5.0 is going to be storage speed in M.2 SSDs, and freeing up more lanes as well. I'm waiting for a cable standard to replace SATA 3 cables so we can have hard drives/SSDs running on PCIe (cabled) instead of the dusty SATA 3 standard. We may be slowly heading in that direction...
 
Joined
Aug 6, 2020
Messages
729 (0.46/day)
Hopefully, these second-gen PCIe 4.0 interfaces can be more efficient; the fact that the RTX 3050 is a 150 W card is mostly due to this!

I mean, you can certainly have a bus-powered card (if you castrate the connection to x4 and the memory bus to 64-bit, like the 6400), or you take the 3060 and underclock it to 3050 performance.
 
Joined
Jan 14, 2019
Messages
12,586 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Hopefully, these second-gen PCIe 4.0 interfaces can be more efficient; the fact that the RTX 3050 is a 150 W card is mostly due to this!

I mean, you can certainly have a bus-powered card (if you castrate the connection to x4 and the memory bus to 64-bit, like the 6400), or you take the 3060 and underclock it to 3050 performance.
It's not the bus that consumes power, but the GPU and the settings it runs at. The reason why modern GPUs aren't nearly as efficient as they could be is that both NVIDIA and AMD run them at the peak of their efficiency curves by default. No one asked the 6500 XT to run at 2800+ MHz and consume 100-120 W. It could have easily been a no-power-connector 75 W card at 2500-2600 MHz, but nooo! Performance is king nowadays, even in the lower segments.
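The efficiency-curve point above can be illustrated with the classic dynamic-power relation P ∝ f·V². The clock/voltage pairs below are invented for the sake of the example and are NOT measured 6500 XT figures:

```python
# Toy dynamic-power model: P scales roughly with frequency times
# voltage squared. Small clock cuts allow big voltage cuts, so power
# falls much faster than performance near the top of the curve.
def relative_power(f_ghz: float, v: float, f0_ghz: float, v0: float) -> float:
    """Power relative to a (f0, v0) baseline under P ~ f * V^2."""
    return (f_ghz * v**2) / (f0_ghz * v0**2)

# Hypothetical operating points: 2.8 GHz @ 1.15 V stock vs 2.55 GHz @ 0.95 V.
scale = relative_power(2.55, 0.95, 2.8, 1.15)
print(f"Downclocked point draws ~{scale:.0%} of stock power "
      f"for ~{2.55 / 2.8:.0%} of the clock speed")
```

Under this toy model, giving up roughly 9% of the clock cuts power by nearly 40%, which is the shape of the trade-off the post describes.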
 
Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
Considering that Hopper already has PCIe 5.0, I'm gonna go out on a limb and guess that Ada also has that capability, but NVIDIA has simply fused it off. Then when AMD launches its next series of GPUs, NVIDIA will launch a minor refresh of Ada that is PCIe 5.0-enabled to steal their thunder.
 
Joined
May 31, 2017
Messages
432 (0.16/day)
Processor Ryzen 5700X
Motherboard Gigabyte B550 Arous Elite V2
Cooling Thermalright PA120
Memory Kingston FURY Renegade 3600Mhz @ 3733 tight timings
Video Card(s) Sapphire Pulse RX 6800
Storage 36TB
Display(s) Samsung QN90A
Case be quiet! Dark Base Pro 900
Audio Device(s) Khadas Tone Pro 2, HD660s, KSC75, JBL 305 MK1
Power Supply Coolermaster V850 Gold V2
Mouse Roccat Burst Pro
Keyboard Dogshit with Otemu Brown
Software W10 LTSC 2021
PCIe 5.0 on the motherboard is enough for storage purposes.
 
Joined
Feb 18, 2021
Messages
90 (0.06/day)
Processor Ryzen 7950X3D
Motherboard Asus ROG Crosshair X670E Hero
Cooling Corsair iCUE H150i ELITE LCD
Memory 64GB (2X 32GB) Corsair Dominator Platinum RGB DDR5 6000MHz CL30
Video Card(s) Zotac GeForce RTX 4090 AMP Extreme AIRO 24GB
Storage WD SN850X 4TB NVMe / Samsung 870 QVO 8TB
Display(s) Asus PG43UQ / Samsung 32" UJ590
Case Phanteks Evolv X
Power Supply Corsair AX1600i
Mouse Logitech MX Master 3
Keyboard Corsair K95 RGB Platinum
Software Windows 11 Pro 24H2
We haven't maxed out x16 PCIe 3.0 just yet, let alone 4.0; 5.0 on GPUs is pointless for now.
 
Joined
Oct 27, 2020
Messages
799 (0.53/day)
I don't know about Nvidia, but AMD might use PCI-E 5.0.
If the rumours are true and the frequency of Navi 31 is 3 GHz, then it will logically have double the pixel fill-rate of Navi 21 (Navi 33 has 64 RBs, half of Navi 21; Navi 32 has 128 RBs; Navi 31 has 192 RBs), while the memory bus will still be 256-bit, with GDDR6 at 6950 XT level or slightly more (20 Gbps? The Samsung 24 Gbps option probably won't be ready for launch). It will need to throw in the kitchen sink in order not to be bandwidth-limited: at least 256 MB of Infinity Cache (the rumour is 512 MB, which is an insane number of transistors per mm² on 6 nm; without the additional logic the module will incorporate, we are talking at least 24 billion transistors and more than 250 mm² just for the 512 MB cache portion of the chiplet), PCI-E 5.0, better compression, and so on, to help with all the memory-access-related issues of the memory stack.
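The transistor estimate in the post above is easy to sanity-check with the standard six-transistor (6T) SRAM cell, ignoring tags, sense amplifiers, and control logic:

```python
# A 6T SRAM cell uses six transistors per bit, so a 512 MB cache alone
# needs on the order of 25 billion transistors before any of the
# surrounding tag, sense-amp, and control logic is counted.
cache_mb = 512
bits = cache_mb * 1024 * 1024 * 8   # 512 MB expressed in bits
transistors = bits * 6              # 6T SRAM cell per bit
print(f"{cache_mb} MB of 6T SRAM ~ {transistors / 1e9:.1f} billion transistors")
```

That lands just above the "at least 24 billion" floor quoted in the post, with the remainder of the chiplet's logic still to add on top.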
 
Joined
Aug 6, 2020
Messages
729 (0.46/day)
I don't know about Nvidia, but AMD might use PCI-E 5.0.
If the rumours are true and the frequency of Navi 31 is 3 GHz, then it will logically have double the pixel fill-rate of Navi 21 (Navi 33 has 64 RBs, half of Navi 21; Navi 32 has 128 RBs; Navi 31 has 192 RBs), while the memory bus will still be 256-bit, with GDDR6 at 6950 XT level or slightly more (20 Gbps? The Samsung 24 Gbps option probably won't be ready for launch). It will need to throw in the kitchen sink in order not to be bandwidth-limited: at least 256 MB of Infinity Cache (the rumour is 512 MB, which is an insane number of transistors per mm² on 6 nm; without the additional logic the module will incorporate, we are talking at least 24 billion transistors and more than 250 mm² just for the 512 MB cache portion of the chiplet), PCI-E 5.0, better compression, and so on, to help with all the memory-access-related issues of the memory stack.

We'll see; they were already bandwidth-limited on Big Navi!

Then there's the added overhead of getting two chiplets to talk to each other (all while sharing the same castrated 256-bit bus!). The only improvements in this area are the doubling of Infinity Cache and a 10% bump in GDDR6 clocks!

You'll be lucky if the performance is 40% faster at 4K.
 
Joined
Aug 20, 2007
Messages
21,544 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
We haven't maxed out x16 PCIe 3.0 just yet, let alone 4.0; 5.0 on GPUs is pointless for now.
Actually, with the 3090 and above we have, but barely. There is a small but consistent gain of around 2% between PCIe 3.0 x16 and PCIe 4.0 x16.

See TPU's review on this.
 
Joined
Oct 27, 2020
Messages
799 (0.53/day)
We'll see; they were already bandwidth-limited on Big Navi!

Then there's the added overhead of getting two chiplets to talk to each other (all while sharing the same castrated 256-bit bus!). The only improvements in this area are the doubling of Infinity Cache and a 10% bump in GDDR6 clocks!

You'll be lucky if the performance is 40% faster at 4K.
lol 40%, what are you talking about?
Now seriously, the easy prediction I can offer is that if the Infinity Cache is 512 MB, AMD will try to label Navi 31 as an 8K-capable card, forcing reviewers to examine 8K resolution, and will try to change the narrative into how it can win in some titles at 8K vs. Nvidia's AD102 with its mere 96 MB of cache, when in reality the comparison should happen at 4K, where 96 MB is just fine for the Nvidia architecture...
 
Joined
Jul 5, 2013
Messages
28,279 (6.75/day)
There is a small but consistent gain of around 2% between PCIe 3.0 x16 and PCIe 4.0 x16.
That has more to do with the improved latency of PCIe 4.0 than the raw bandwidth. We're still a little ways off from fully saturating the PCIe 3.0 x16 bus, but in fairness, we are close.
See TPU's review on this.
W1zzard did some testing with an RX 5700 XT.
The difference between PCIe 2.0, 3.0 and 4.0 was 1% or 2%, depending on the resolution.

The next year he followed up with more testing, this time with a 3080.
In this series of testing, he included the PCIe 1.1 spec at x16 as well as PCIe 1.1 at x8. For PCIe 2.0 the difference was a few percent more, but PCIe 3.0 and 4.0 were still within 1% of each other, not enough to be at all worried about.

While a 3090 or a 3090 Ti are faster than a 3080, they are not so much faster as to present a serious bottlenecking situation on the PCIe 3.0 bus, and only minimal bottlenecking on PCIe 2.0.
 
Joined
Aug 20, 2007
Messages
21,544 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
lol 40%, what are you talking about?
Now seriously, the easy prediction I can offer is that if the Infinity Cache is 512 MB, AMD will try to label Navi 31 as an 8K-capable card, forcing reviewers to examine 8K resolution, and will try to change the narrative into how it can win in some titles at 8K vs. Nvidia's AD102 with its mere 96 MB of cache, when in reality the comparison should happen at 4K, where 96 MB is just fine for the Nvidia architecture...
I mean, NVIDIA already attempted to brand the GA102 chip as an 8K chip. No one took it seriously.
 
Joined
Oct 27, 2020
Messages
799 (0.53/day)
I mean, NVIDIA already attempted to brand the GA102 chip as an 8K chip. No one took it seriously.
And rightfully so, but this time, if the performance is 2.5x vs. the 6900 XT according to leaks (I doubt it will reach that level; probably 2.3-2.4x at 4K), then it will offer the same experience at 8K as the RX 6800 had at 4K at launch (AMD branded the 6800 as a 4K card at launch).
The problem is that not many people have 8K displays (and those aren't OLED-based unless we are talking about €10,000-30,000 TVs), and the forecast is that it will take around 5 years for 8K OLEDs to reach mainstream prices, as Chinese manufacturers gradually ramp up production over the next 5 years. Let's see how the war and its reverberations escalate first. :(
And honestly, I'm sick and tired of AMD trying to change the narrative however it suits them. For example, they decided to position the 6650 XT, a $400-500 SRP card depending on the brand/model, as a 1080p card, guiding reviewers to prioritise the 1080p difference vs. the competition in their conclusions, because the Infinity Cache size kills its performance at higher resolutions (for example, an MSI RTX 3060 Gaming X is 1-2% faster than an MSI RX 6650 XT Gaming X at 4K).
Sure, 4K isn't the intended resolution for these cards, but there are many games, like Doom Eternal, Resident Evil 3, F1 2020, plus older ones of course, where the MSI RTX 3060 Gaming X averages more than 60 fps at 4K max settings. Add to that all the games that can hit 60 fps at 4K at slightly lower settings with very minor visual differences, and the catalogue isn't small.
But setting aside the 4K argument, which rightfully shouldn't be the main criterion for the 3060 vs. 6650 performance difference, why not 1440p? Is that too much to ask of a $400-500 SRP card? (After all, the average fps the 6650 XT hits at QHD in TPU's setup is very similar to a reference 6900 XT at 4K, around 82 vs. 87 fps, so if the 6900 XT is fine for 4K, why not QHD for the 6650 XT?) Or is it the fact that the $399 SRP 3060 Ti is around 20% faster at QHD and 37% faster at 4K, for example? (Reference vs. reference, or OC vs. OC.)
 
Joined
Jan 14, 2019
Messages
12,586 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
That has more to do with the improved latency of PCIe 4.0 than the raw bandwidth. We're still a little ways off from fully saturating the PCIe 3.0 x16 bus, but in fairness, we are close.

W1zzard did some testing with an RX 5700 XT.
The difference between PCIe 2.0, 3.0 and 4.0 was 1% or 2%, depending on the resolution.

The next year he followed up with more testing, this time with a 3080.
In this series of testing, he included the PCIe 1.1 spec at x16 as well as PCIe 1.1 at x8. For PCIe 2.0 the difference was a few percent more, but PCIe 3.0 and 4.0 were still within 1% of each other, not enough to be at all worried about.

While a 3090 or a 3090 Ti are faster than a 3080, they are not so much faster as to present a serious bottlenecking situation on the PCIe 3.0 bus, and only minimal bottlenecking on PCIe 2.0.
I think the difference between PCIe versions is nothing to be concerned about with a normal x16 graphics card. It's more of an issue with fewer lanes, like the x4 of the Radeon RX 6400 and 6500 XT.
 