
NVIDIA GeForce RTX 3080 PCI-Express Scaling

Is it? It's about the very subject matter, is it not?

Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation
 
Average framerate data isn't nearly as relevant as frametime analysis when all that changes is the interface bandwidth; I can't believe the authors of this article didn't think of that.
They did. It's not relevant enough to warrant a detailed analysis.

What would be interesting, and I mentioned this in another thread already, would be a run of tests that show performance on actual period-correct hardware. However, it is possible that @W1zzard does not have sample hardware available for such a series of tests. Not that it is critical. The information rendered in this article gives a good reference point to understand the limitations of each PCIe spec.

Still, it would be interesting to see the effect other potential limitations have on the result. For example, CPU, chipset and RAM throughput. The PCIe bus spec is only one part of that equation.
 
I don't think PCIe 4.0 is a big deal yet, but I do think that bandwidth will be taken advantage of more readily going forward. Cache acceleration integrated into GPU design has big potential, especially for AMD, which already has all the IP available to leverage it extremely well. Intel in some ways has even more IP to leverage that sort of thing thanks to Optane, but it depends how you look at it, because Optane is inferior to DDR4 in terms of sheer speed, which in this scenario is what matters most. I'd really love to see how far an individual, very well binned DDR4 chip could scale paired with a Zen 3 CPU, perhaps a 2c or 4c variant: something cut down, designed simply for a GPU's cache acceleration, decompression, and compression needs. It does seem an ARM acquisition could shake things up a fair bit; Nvidia would then have the ability to do something similar without resorting to licensing ARM chip designs. Intel obviously can as well, with the added option of an Optane cache layer. Not to mention they did some interesting stuff on the cache side with the eDRAM in the desktop Broadwell chip prior to Skylake; incorporating a bit of that might work great for this kind of use.
 
Great review as always with these PCIe scaling reviews. :toast:
 
He's testing 4.0 because the AMD platform is the only one with PCIe 4.0!

Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation
 
So that leaves me with a conundrum. I have an Intel desktop platform with only 16 PCIe 3.0 lanes. Do I run the GPU at x8 and put my SSD on the remaining CPU lanes, or do I run the GPU at x16 and put the SSD on the slower PCH lanes? It's not a simple answer if the GPU is reading directly from the SSD and both need the bandwidth. This article says I lose about 3% when I reduce the GPU to x8, but I don't know how much the SSD benefits.
 
When the RTX 3090 is out, please test PCIe 4.0 vs 3.0 again with RTX 3090 SLI.
 
Is there going to be a PCI-E scaling benchmark for the RTX 3090?
What I'd like to see is if there is any variation at 8K, since 4K is getting taxed the most.
 
It looks like the main advantages of PCI-E 4.0 are only for storage use. Makes sense if you do a lot of sequential data transfer.

Also dual-port 40 Gb/s NICs, which use x8 as standard.
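For reference, the usable per-lane bandwidth of each PCIe generation can be tabulated in a few lines. This is a rough sketch based on the published 2.5/5/8/16 GT/s signaling rates minus line-encoding overhead; real-world throughput is a bit lower still due to packet overhead:

```python
# Usable one-direction PCIe bandwidth in GB/s per lane: raw signaling
# rate minus line-encoding overhead (8b/10b for Gen 1/2, 128b/130b for
# Gen 3/4), divided by 8 to convert Gb/s to GB/s.
PER_LANE_GBPS = {
    "1.1": 2.5 * (8 / 10) / 8,      # 0.25 GB/s
    "2.0": 5.0 * (8 / 10) / 8,      # 0.50 GB/s
    "3.0": 8.0 * (128 / 130) / 8,   # ~0.985 GB/s
    "4.0": 16.0 * (128 / 130) / 8,  # ~1.969 GB/s
}

def link_bandwidth(gen, lanes):
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth(gen, lanes):.2f} GB/s")
```

So a dual-port 40 GbE card pushing both ports flat out needs roughly 10 GB/s, while a PCIe 3.0 x8 slot gives about 7.9 GB/s: enough for one port at line rate, not quite both at once, which is why Gen 4 x8 helps there.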

 
No bud, not for the reasons of PCIe scaling - Hardware Unboxed tested the 3080 exclusively on Ryzen! :roll:

That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.
 
So that leaves me with a conundrum. I have an Intel desktop platform with only 16 PCIe 3.0 lanes. Do I run the GPU at x8 and put my SSD on the remaining CPU lanes, or do I run the GPU at x16 and put the SSD on the slower PCH lanes? It's not a simple answer if the GPU is reading directly from the SSD and both need the bandwidth. This article says I lose about 3% when I reduce the GPU to x8, but I don't know how much the SSD benefits.
So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 30xx processors.
I guess the difference will be even larger with the new Zen 3 Ryzen 40xx or 50xx processors (whichever they name them).
 
Considering how well the GPU still tends to scale even on 2.0 and even 1.0 in some cases, I'd like to see, just mostly for fun, how much FPS is possible in an ancient rig running 1.1 PCIe and the top CPU of the time period at 1080p minimum (an extreme case of blowing the budget on GPU-only upgrades for old rigs). Given that earlier Ryzens ran on PCIe 2.0, it's not too surprising to see GPUs still able to provide respectable numbers on PCIe 2.0 in still a fairly modern setup.

That said, I could still see FPS chasers waving around these graphs and insisting that they absolutely must upgrade to Ryzen 3000 or the upcoming 4000 NOW for that extra .5% FPS boost. Which would perfectly benefit AMD's CPU division and those mobo companies slightly burned by Intel's delay on 4.0 capable CPUs (from a video where GN mentioned the topic).

If anyone wants to send me a 3080, I'll happily test on my "last-of-the-PCIe-2.0" 2600K :roll:

I think it's probably time to upgrade. I'm pretty sure this hardware all belongs in the retro forum now!

Ok, here's a video testing a 3080 with an FX CPU (which are all PCIe 2.0). Should have known Greg would do a test like this:

I watched this vid with interest as my CPU is from the same era. I ran the Time Spy Extreme test on my rig, and this is my result:

Big difference in the CPU score!
 

(Attachment: 2338.jpg)
Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try and plug an $800 graphics card into a board from the Core2/PhenomII era.



PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO.
THE PCIe 2.0 BANDWIDTH WILL BE A BOTTLENECK*




* - I think there may be some other bottlenecks too.
 
Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try and plug an $800 graphics card into a board from the Core2/PhenomII era.



PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO.
THE PCIe 2.0 BANDWIDTH WILL BE A BOTTLENECK*




* - I think there may be some other bottlenecks too.

Going to plug my 3090 into my old socket 775 system when it gets here. I'll let you know how it works out.
 
And what about temps?
 
That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.
Maybe they were looking for validation of platform superiority, but it turns out it's useful in other areas most don't utilize.
And what about temps?
Look at the review of the GPU, not the PCIe scaling article. ;)

 
So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 30xx processors.
I guess the difference will be even larger with the new Zen 3 Ryzen 40xx or 50xx processors (whichever they name them).
That's not quite what I was referring to. I'm talking about PCIe 3.0 x8 vs PCIe 3.0 x16. When you go to PCIe 4.0, the difference between x8 and x16 is negligible. But most people with Intel platforms just run the GPU at PCIe 3.0 x16 and don't worry about the SSD's bandwidth on the PCH. I'm wondering if I should worry about SSD bandwidth.
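A back-of-the-envelope comparison of the two layouts, assuming (as on typical Intel desktop platforms of that era) that the PCH hangs off a DMI 3.0 uplink, which is roughly equivalent to a PCIe 3.0 x4 link shared by everything behind it. The numbers are illustrative, not measured:

```python
PCIE3_PER_LANE = 8.0 * (128 / 130) / 8  # ~0.985 GB/s usable per Gen 3 lane

# Option A: GPU at x8, SSD on the other x8 CPU lanes.
gpu_a = 8 * PCIE3_PER_LANE    # ~7.9 GB/s (the ~3% FPS cost from the article)
ssd_a = 8 * PCIE3_PER_LANE    # ~7.9 GB/s, far more than a Gen 3 x4 SSD can use

# Option B: GPU at x16, SSD behind the PCH.
gpu_b = 16 * PCIE3_PER_LANE   # ~15.8 GB/s, full-speed GPU
ssd_b = 4 * PCIE3_PER_LANE    # ~3.9 GB/s DMI uplink, shared with SATA/USB/LAN

print(f"Option A: GPU {gpu_a:.1f} GB/s, SSD {ssd_a:.1f} GB/s")
print(f"Option B: GPU {gpu_b:.1f} GB/s, SSD {ssd_b:.1f} GB/s (shared)")
```

Since a Gen 3 x4 NVMe drive tops out around 3.5 GB/s anyway, option B only costs SSD bandwidth when other PCH devices are busy at the same time, while option A costs GPU performance all the time.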
 
Why not benchmark loading times? That's when data are sent to the GPU. Isn't it more relevant than FPS?
 
Why not benchmark loading times? That's when data are sent to the GPU. Isn't it more relevant than FPS?
Nope. Not relevant at all. Loading times in games are primarily a storage limitation. VRAM is faster than any NVMe SSD by leaps and bounds. ;)
 
Nope. Not relevant at all. Loading times in games are primarily a storage limitation. VRAM is faster than any NVMe SSD by leaps and bounds. ;)

But the level data has to move from the CPU to the GPU through PCIe lanes. Isn't that when there's a chance to saturate the PCIe bandwidth, especially if the NVMe SSD is PCIe 4.0 compliant?
 
I guess I'll stick with my hexacore X58 running on 2.0 :) and with my NVMe my PC is even more future-proof!
 
PSA, FUTURE 3080 OWNERS:
DO NOT USE A CORE2 DUO/QUAD CPU.
THAT CPU SERIES WILL BE A HUGE BOTTLENECK*
Fixed that for you. The information in the above article shows that PCIe 2.0 isn't that great of a bottleneck. Additionally, most of the C2D/C2Q-series chipsets were PCIe 1.1, not 2.0 as you stated. The PCIe 2.0 standard was not adopted until the P4x, G4x, Q4x and X38/X48 chipsets, which came late in the product lifecycle.
 