
NVIDIA GeForce RTX 3080 PCI-Express Scaling

Joined
Apr 16, 2019
Messages
632 (0.31/day)
Is it? It's about the very subject matter, is it not?

Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation
 
Last edited by a moderator:
Joined
Jul 5, 2013
Messages
27,483 (6.63/day)
Average framerate data isn't nearly as relevant as frametime analysis when all that changes is the interface bandwidth; I can't believe the authors of this article didn't think of that.
They did. It's not relevant enough to warrant a detailed analysis.

What would be interesting, and I mentioned this in another thread already, would be a run of tests that show performance on actual period correct hardware. However, it is possible that @W1zzard does not have sample hardware available for such a series of tests. Not that it is critical. The information rendered in this article gives a good reference point to understand the limitations of each PCIe spec.

Still, it would be interesting to see the effect other potential limitations have on the result. For example, CPU, chipset and RAM throughput. The PCIe bus spec is only one part of that equation.
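To illustrate what a frametime pass would add over plain averages, here's a minimal sketch; the frame-time lists are made-up illustrations and `frametime_stats` is a hypothetical helper, not anything from the article:

```python
def frametime_stats(frame_times_ms):
    """Return average FPS plus the mean of the worst 1% of frame times."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    return avg_fps, sum(worst) / len(worst)

# Two runs with near-identical average FPS but very different smoothness:
smooth  = [16.7] * 100               # steady ~60 FPS
stutter = [15.0] * 99 + [180.0]      # similar average, one big hitch

print(frametime_stats(smooth))   # ~(59.9, 16.7)
print(frametime_stats(stutter))  # ~(60.1, 180.0)
```

Both runs average roughly 60 FPS, but only the worst-1% number exposes the hitch — which is exactly the kind of difference an averages-only chart hides.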
 
Last edited by a moderator:
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I don't think PCIe 4.0 is a big deal yet, but I do think that bandwidth will be taken advantage of more readily going forward. Cache acceleration integrated into GPU design has big potential, especially for AMD, who already has all the IP available to leverage it extremely well. In some ways Intel has even more IP to leverage that sort of thing thanks to Optane, though it depends how you look at it, because Optane is inferior to DDR4 in terms of sheer speed, which in this scenario is what matters most.

I'd really love to see how far an individual, very well binned DDR4 chip could scale paired with a Zen 3 CPU, perhaps a 2c or 4c variant: something cut down, designed simply for cache acceleration, decompression, and compression duties for a GPU. An ARM acquisition could shake things up a fair bit too; NVIDIA would then be able to do something similar without resorting to licensing ARM chip designs. Intel obviously can as well, with the added option of an Optane cache layer, and they did some interesting things on the cache side with the eDRAM on the desktop Broadwell chips prior to Skylake; incorporating a bit of that might work great for this kind of use.
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,608 (2.90/day)
Location
Jyväskylä, Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X @ PBO +200 -20CO
Motherboard Asus ROG Crosshair VII Hero
Cooling Arctic Freezer 50, EKWB Vector TUF
Memory 32GB Kingston HyperX Fury DDR4-3466
Video Card(s) Asus GeForce RTX 3080 TUF OC 10GB
Storage A pack of SSDs totaling 3.2TB + 3TB HDDs
Display(s) 27" 4K120 IPS + 32" 4K60 IPS + 24" 1080p60
Case Corsair 4000D Airflow White
Audio Device(s) Asus TUF H3 Wireless / Corsair HS35
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518 + Asus ROG Strix Edge Nordic
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis
Great review as always with these PCIe scaling reviews. :toast:
 
Joined
Aug 24, 2004
Messages
217 (0.03/day)
He's testing 4.0 because the AMD platform is the only one with PCIe 4.0!

Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation
 
Last edited by a moderator:
Joined
Dec 16, 2010
Messages
1,668 (0.33/day)
Location
State College, PA, US
System Name My Surround PC
Processor AMD Ryzen 9 7950X3D
Motherboard ASUS STRIX X670E-F
Cooling Swiftech MCP35X / EK Quantum CPU / Alphacool GPU / XSPC 480mm w/ Corsair Fans
Memory 96GB (2 x 48 GB) G.Skill DDR5-6000 CL30
Video Card(s) MSI NVIDIA GeForce RTX 4090 Suprim X 24GB
Storage WD SN850 2TB, Samsung PM981a 1TB, 4 x 4TB + 1 x 10TB HGST NAS HDD for Windows Storage Spaces
Display(s) 2 x Viotek GFI27QXA 27" 4K 120Hz + LG UH850 4K 60Hz + HMD
Case NZXT Source 530
Audio Device(s) Sony MDR-7506 / Logitech Z-5500 5.1
Power Supply Corsair RM1000x 1 kW
Mouse Patriot Viper V560
Keyboard Corsair K100
VR HMD HP Reverb G2
Software Windows 11 Pro x64
Benchmark Scores Mellanox ConnectX-3 10 Gb/s Fiber Network Card
So that leaves me with a conundrum. I have an Intel desktop platform with only 16 PCIe 3.0 lanes. Do I run the GPU at x8 and put my SSD on the remaining CPU lanes, or do I run the GPU at x16 and put the SSD on the slower PCH lanes? It's not a simple answer if the GPU is reading directly from the SSD and both need the bandwidth. This article says I lose about 3% when I reduce the GPU to x8, but I don't know how much the SSD benefits.
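For rough context on that trade-off, the theoretical per-direction bandwidth of each link can be sketched like this (a back-of-envelope calculation; `pcie_gb_s` is a made-up helper, and real-world throughput is lower than these ceilings):

```python
# Theoretical one-direction PCIe bandwidth after line-encoding overhead
# (8b/10b for Gen1/2, 128b/130b for Gen3/4).
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}   # GT/s per lane
ENCODING    = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130}

def pcie_gb_s(gen, lanes):
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8

print(pcie_gb_s(3, 16))  # ~15.8 GB/s (GPU at x16)
print(pcie_gb_s(3, 8))   # ~7.9 GB/s  (GPU at x8)
print(pcie_gb_s(3, 4))   # ~3.9 GB/s  (x4 NVMe SSD)
```

On paper, splitting the CPU lanes x8 + x4 still gives a Gen3 NVMe drive its full link, at the cost of the roughly 3% GPU hit the article measures.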
 

good11

New Member
Joined
May 27, 2020
Messages
6 (0.00/day)
When the RTX 3090 is out, please test PCIe 4.0 vs 3.0 again with RTX 3090 SLI.
 
Joined
Jul 4, 2018
Messages
120 (0.05/day)
Location
Seattle area, Wa
System Name Not pretty
Processor Ryzen 9 9950x
Motherboard Crosshair X870E
Cooling 420mm Arctic LF III, for now
Memory 64GB, DDR5-6000 cl30, G.Skill
Video Card(s) EVGA FTW3 RTX 3080ti
Storage 1TB Samsung 980 Pro (Win10), 2TB WD SN850X (Win11)
Display(s) old 27" Viewsonic 1080p, Asus 1080p, Viewsonic 4k
Case Corsair Obsidian 900D
Power Supply Super Flower
Benchmark Scores Cinebench r15, w/ 1680v2 @ 4.6ghz and XMP enabled, 1648 1680v2 @ 4.7ghz RAM @ stock 1333MT/s, 1696
Is there going to be a PCI-E Scaling benchmark for the RTX 3090?
What I'd like to see is if there is any variation at 8k, since 4k is getting taxed the most.
 
Last edited:

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,468 (2.85/day)
Location
Piteå
System Name White DJ in Detroit
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + Sony MDR-10RC, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
It looks like the main advantages of PCI-E 4.0 are only for storage use. Makes sense if you do a lot of sequential data transfer.

Also dual-port 40 Gb/s NICs, which use x8 as standard.
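To put rough numbers on the NIC point (line rates only, ignoring protocol overhead above the PCIe line encoding):

```python
nic_gb_s      = 2 * 40 / 8                  # dual-port 40 Gb/s = 10 GB/s
pcie3_x8_gb_s = 8.0 * 8 / 8 * (128 / 130)   # ~7.9 GB/s usable
pcie4_x8_gb_s = 16.0 * 8 / 8 * (128 / 130)  # ~15.8 GB/s usable

# Gen3 x8 can't keep both ports saturated; Gen4 x8 has headroom to spare.
assert pcie3_x8_gb_s < nic_gb_s < pcie4_x8_gb_s
```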

 
Joined
Apr 18, 2015
Messages
234 (0.07/day)
No bud, not for reasons of PCIe scaling - Hardware Unboxed tested the 3080 exclusively on Ryzen! :roll:

That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.
 
Joined
Jul 5, 2019
Messages
317 (0.16/day)
Location
Berlin, Germany
System Name Workhorse
Processor 13900K 5.9 Ghz single core (2x) 5.6 Ghz Allcore @ -0.15v offset / 4.5 Ghz e-core -0.15v offset
Motherboard MSI Z690A-Pro DDR4
Cooling Arctic Liquid Cooler 360 3x Arctic 120 PWM Push + 3x Arctic 140 PWM Pull
Memory 2 x 32GB DDR4-3200-CL16 G.Skill RipJaws V @ 4133 Mhz CL 18-22-42-42-84 2T 1.45v
Video Card(s) RX 6600XT 8GB
Storage PNY CS3030 1TB nvme SSD, 2 x 3TB HDD, 1x 4TB HDD, 1 x 6TB HDD
Display(s) Samsung 34" 3440x1400 60 Hz
Case Coolermaster 690
Audio Device(s) Topping Dx3 Pro / Denon D2000 soon to mod it/Fostex T50RP MK3 custom cable and headband / Bose NC700
Power Supply Enermax Revolution D.F. 850W ATX 2.4
Mouse Logitech G5 / Speedlink Kudos gaming mouse (12 years old)
Keyboard A4Tech G800 (old) / Apple Magic keyboard
So that leaves me with a conundrum. I have an Intel desktop platform with only 16 PCIe 3.0 lanes. Do I run the GPU at x8 and put my SSD on the remaining CPU lanes or do I run the GPU at x16 and put the SSD on the slower PCH lanes? It's not a simple answer if the GPU is reading diectly from the SSD and both need the bandwidth. This article says I lose about 3% when I reduce the GPU to x8 but I don't know how much the SSD benefits.
So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 3000 processors.
I guess the difference will be even larger with the new Zen 3 Ryzen 4000 or 5000 processors (whichever they name them).
 
Joined
Jun 6, 2007
Messages
441 (0.07/day)
Location
Manchester, UK
System Name Colin #2 - the revenge!
Processor Ryzen 7 5800X3D
Motherboard Gigabyte B550 Aorus Elite V2
Cooling 4x Phanteks SK140 PWM & Arctic Freezer II 280 AIO
Memory TeamGroup Dark Pro 8 Pack 2 x16Gb dual rank B-die 3733MHz CL16
Video Card(s) MSI RTX 4080 Suprim X
Storage WD SN850X 1Tb + WD SN770 2Tb
Display(s) MSI MPG321URX
Case Phanteks P500A
Audio Device(s) Realtek ALC1200/1220
Power Supply 750W Corsair RM750
VR HMD PSVR2
Considering how well the GPU still tends to scale even on 2.0, and even 1.0 in some cases, I'd like to see, mostly for fun, how much FPS is possible in an ancient rig running PCIe 1.1 with the top CPU of the period, at 1080p minimum (an extreme case of blowing the budget on GPU-only upgrades for an old rig). Given that earlier Ryzens ran on PCIe 2.0, it's not too surprising to see GPUs still able to provide respectable numbers on PCIe 2.0 in a fairly modern setup.

That said, I could still see FPS chasers waving around these graphs and insisting that they absolutely must upgrade to Ryzen 3000 or the upcoming 4000 NOW for that extra .5% FPS boost. Which would perfectly benefit AMD's CPU division and those mobo companies slightly burned by Intel's delay on 4.0 capable CPUs (from a video where GN mentioned the topic).

If anyone wants to send me a 3080 i'll happily test on my "last-of-the pci-e 2.0" 2600k :roll:

I think it's probably time to upgrade. I'm pretty sure this hardware all belongs in the retro forum now!

Ok, here's a video testing a 3080 with an FX CPU (which are all PCIe 2.0). Should have known Greg would do a test like this:

I watched this vid with interest, as my CPU is from the same era. I ran the Time Spy Extreme test on my rig and this is my result.

Big difference in the CPU score!
 

Attachments

  • 2338.jpg (497.1 KB)
Joined
Feb 20, 2019
Messages
8,211 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try and plug an $800 graphics card into a board from Core2/PhenomII era.



PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO.
THE PCIe 2.0 BANDWIDTH WILL BE A BOTTLENECK*




* - I think there may be some other bottlenecks too.
 
Joined
Aug 24, 2004
Messages
217 (0.03/day)
Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try and plug an $800 graphics card into a board from Core2/PhenomII era.



PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO.
THE PCIe 2.0 BANDWIDTH WILL BE A BOTTLENECK*




* - I think there may be some other bottlenecks too.

Going to plug my 3090 into my old socket 775 system when it gets here. I'll let you know how it works out.
 
Joined
Jun 16, 2010
Messages
20 (0.00/day)
System Name Eskwy
Processor 4790k @ 4.4Ghz
Motherboard Asus Hero VII
Cooling Stock cooler for the moment
Memory 8GB Kingston HyperX Genesis 2133Mhz
Video Card(s) Gtx 1070 strix oc
Storage 2x Western Digital 1 To -1x Samung Evo 840 256GB - 1x OCZ Vertex 4 128GB
Display(s) ASUS VG248QE 144hz
Case Antec Twelve Hundred
Power Supply Corsair HX 620W
Software Windows 10 Pro x64
And what about temps?
 
Last edited:
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.
Maybe looking for validation of platform superiority, but it turns out it's useful in other areas most don't utilize.
And what about temps ?
Look at the review of the GPU, not the PCIe scaling article. ;)

 
Joined
Dec 16, 2010
Messages
1,668 (0.33/day)
Location
State College, PA, US
System Name My Surround PC
Processor AMD Ryzen 9 7950X3D
Motherboard ASUS STRIX X670E-F
Cooling Swiftech MCP35X / EK Quantum CPU / Alphacool GPU / XSPC 480mm w/ Corsair Fans
Memory 96GB (2 x 48 GB) G.Skill DDR5-6000 CL30
Video Card(s) MSI NVIDIA GeForce RTX 4090 Suprim X 24GB
Storage WD SN850 2TB, Samsung PM981a 1TB, 4 x 4TB + 1 x 10TB HGST NAS HDD for Windows Storage Spaces
Display(s) 2 x Viotek GFI27QXA 27" 4K 120Hz + LG UH850 4K 60Hz + HMD
Case NZXT Source 530
Audio Device(s) Sony MDR-7506 / Logitech Z-5500 5.1
Power Supply Corsair RM1000x 1 kW
Mouse Patriot Viper V560
Keyboard Corsair K100
VR HMD HP Reverb G2
Software Windows 11 Pro x64
Benchmark Scores Mellanox ConnectX-3 10 Gb/s Fiber Network Card
So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 3000 processors.
I guess the difference will be even larger with the new Zen 3 Ryzen 4000 or 5000 processors (whichever they name them).
That's not quite what I was referring to. I'm talking about PCIe 3.0 x8 vs PCIe 3.0 x16. When you go to PCIe 4.0, the difference between x8 and x16 is negligible. But most people with Intel platforms just run the GPU at PCIe 3.0 x16 and don't worry about the SSD's bandwidth on the PCH. I'm wondering if I should worry about SSD bandwidth.
 

mkontra

New Member
Joined
Sep 17, 2020
Messages
2 (0.00/day)
Why not benchmark loading times? That's when data are sent to the GPU. Isn't it more relevant than FPS?
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Why not benchmark loading times? That's when data are sent to the GPU. Isn't it more relevant than FPS?
Nope. Not relevant at all. Loading times in games are primarily a storage limitation. VRAM is faster than any NVMe SSD by leaps and bounds. ;)
 

mkontra

New Member
Joined
Sep 17, 2020
Messages
2 (0.00/day)
Nope. Not relevant at all. Loading times in games are primarily a storage limitation. VRAM is faster than any NVMe SSD by leaps and bounds. ;)

But the level data has to move from the CPU to the GPU through PCIe lanes. Isn't that when there's a chance to saturate the PCIe bandwidth, especially if the NVMe SSD is PCIe 4.0 compliant?
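A quick sanity check on where loading time actually goes, using illustrative figures (the 5 GB level size and drive speed are assumptions for the sake of the arithmetic, not measurements):

```python
level_gb       = 5.0     # hypothetical level data to load
pcie3_x16_gb_s = 15.8    # approx. usable Gen3 x16 bandwidth
nvme_read_gb_s = 3.5     # approx. Gen3 x4 NVMe sequential read

bus_seconds  = level_gb / pcie3_x16_gb_s   # ~0.32 s on the bus
disk_seconds = level_gb / nvme_read_gb_s   # ~1.43 s off the drive

# The drive read dominates; only when storage stops being the slow
# link would the PCIe generation start to show up in load times.
```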
 
Joined
Sep 17, 2020
Messages
17 (0.01/day)
System Name Custom Rig
Processor Intel x5675 6/12 Corez @ 4.6ghz 24/7 STABLE!
Motherboard Asus P6T Deluxe V2 X58
Cooling Noctua NH-D15 Chromamax Black
Memory 24GB Patriot Sector 7 @ 1603mhz 9-9-9-20 1T
Video Card(s) Sapphire RX 580 Nitro+ 8gb Special Edition Factory OC'ed
Storage NVME M.2 1TB ADATA SX8200 Pro XPG SSD & 1TB Adata SE800 Ultra Fast USB 3.2/USB C Ext SSD Drive
Display(s) 24" Samsung Syncmaster
Case NZXT Alpha Case Side Window Edition w/ Airflow Mods & Custom Painted Metallic Black Interior
Power Supply OCZ ModXStream Pro 600W 80+ Gold
Mouse Logitech MK700 Wireless mouse & keyboard
Software Dual Boot - Windows 10 + Hackintosh
Benchmark Scores Available upon request!
I guess I'll stick with my hexacore X58 running on 2.0 :) and with my NVMe, my PC is even more future-proof!
 
Joined
Jul 5, 2013
Messages
27,483 (6.63/day)
PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO/QUAD CPU.
THAT CPU SERIES WILL BE A HUGE BOTTLENECK*
Fixed that for you. The information in the above article shows that PCIe 2.0 isn't that great of a bottleneck. Additionally, most of the C2D/C2Q-era chipsets were PCIe 1.1, not 2.0 as you stated. The PCIe 2.0 standard wasn't adopted until the P4x, G4x, Q4x and X38/X48 chipsets, which arrived late in the product lifecycle.
 