# NVIDIA GeForce RTX 3080 PCI-Express Scaling



## W1zzard (Sep 16, 2020)

NVIDIA Ampere finally brings PCI-Express 4.0 support to the high-end graphics market. The new interface promises twice the bandwidth of PCI-Express 3.0. We've set up an AMD Ryzen 9 3900XT system to test how various PCIe generations and lane widths affect gaming performance.
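For reference, the usable one-way bandwidth per generation and link width can be sketched from the standard line rates minus encoding overhead (the helper name below is mine, not from the article):

```python
# Per-lane PCIe transfer rates (GT/s) and encoding efficiency per generation.
# Gen 1/2 use 8b/10b encoding (20% overhead); Gen 3/4 use 128b/130b (~1.5%).
GT_PER_S = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0}
ENCODING = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe link."""
    effective_gbit_per_lane = GT_PER_S[gen] * ENCODING[gen]
    return effective_gbit_per_lane / 8 * lanes  # convert bits to bytes

for gen in GT_PER_S:
    print(f"PCIe {gen} x16: {link_bandwidth_gbps(gen, 16):.2f} GB/s")
```

This reproduces the doubling each generation: roughly 4, 8, ~15.75, and ~31.5 GB/s for an x16 slot, which is why 3.0 x16 and 4.0 x8 land in the same place on the charts.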

*Show full review*


----------



## deu (Sep 16, 2020)

Practically no bottleneck even on PCI-E 2.0! :0 PCI-E is holding up quite well; GPU manufacturers, get your s*** together and bottleneck them faster!


----------



## BorisDG (Sep 16, 2020)

As expected, PCI-E 3.0 is enough.


----------



## Jism (Sep 16, 2020)

And even if PCI-E 3.0/2.0 were "limited" in some way, you could always throw a marginal OC at it, e.g. a 110 MHz base clock, which increases the bandwidth even more.
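A minimal sketch of that idea, assuming link bandwidth scales linearly with the 100 MHz reference clock (`oc_bandwidth_gbps` is a hypothetical helper, and whether a given board is actually stable at 110 MHz is a separate question):

```python
# PCIe link speed is derived from the ~100 MHz reference clock, so raising the
# base clock raises usable bandwidth proportionally (stability permitting).
def oc_bandwidth_gbps(stock_gbps: float, bclk_mhz: float) -> float:
    """Scale a stock bandwidth figure by a base-clock overclock."""
    return stock_gbps * (bclk_mhz / 100.0)

stock = 15.75  # approx usable GB/s of PCIe 3.0 x16 at a stock 100 MHz clock
print(f"{oc_bandwidth_gbps(stock, 110):.2f} GB/s")  # 110 MHz -> ~17.3 GB/s
```

So a 110 MHz base clock buys about 10% extra bus bandwidth, still well short of a generational jump.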


----------



## TechLurker (Sep 16, 2020)

Considering how well the GPU still tends to scale even on 2.0 and even 1.0 in some cases, I'd like to see, just mostly for fun, how much FPS is possible in an ancient rig running 1.1 PCIe and the top CPU of the time period at 1080p minimum (an extreme case of blowing the budget on GPU-only upgrades for old rigs). Given that earlier Ryzens ran on PCIe 2.0, it's not too surprising to see GPUs still able to provide respectable numbers on PCIe 2.0 in still a fairly modern setup.

That said, I could still see FPS chasers waving around these graphs and insisting that they absolutely must upgrade to Ryzen 3000 or the upcoming 4000 NOW for that extra .5% FPS boost. Which would perfectly benefit AMD's CPU division and those mobo companies slightly burned by Intel's delay on 4.0 capable CPUs (from a video where GN mentioned the topic).


----------



## jeremyshaw (Sep 16, 2020)

Any chance of frametime/lows analysis? It was done in the main RTX 3080 article, and it wouldn't have to be too broad - just covering max PCIe 3.0, 4.0 configs would be more than enough to conclusively settle the matter in a pre-DirectStorage timeframe.


----------



## Charcharo (Sep 16, 2020)

I think we need 1% Lows on these since those tend to be slightly more impacted by PCIE bandwidth.


----------



## Jism (Sep 16, 2020)

TechLurker said:


> Considering how well the GPU still tends to scale even on 2.0 and even 1.0 in some cases, I'd like to see, just mostly for fun, how much FPS is possible in an ancient rig running 1.1 PCIe and the top CPU of the time period at 1080p minimum (an extreme case of blowing the budget on GPU-only upgrades for old rigs). Given that earlier Ryzens ran on PCIe 2.0, it's not too surprising to see GPUs still able to provide respectable numbers on PCIe 2.0 in still a fairly modern setup.
> 
> That said, I could still see FPS chasers waving around these graphs and insisting that they absolutely must upgrade to Ryzen 3000 or the upcoming 4000 NOW for that extra .5% FPS boost. Which would perfectly benefit AMD's CPU division and those mobo companies slightly burned by Intel's delay on 4.0 capable CPUs (from a video where GN mentioned the topic).



You should understand that PCI-E 4.0 was released, and 5.0 is coming, primarily due to enterprise demands and standards, not for gamers. AMD is the only one that implemented PCI-E 4.0 in their consumer products because they are not going to maintain two different CPU lines with different chipsets; they use the same IP across both segments, with quality differing per product.

PCI-E 3.0 isn't even fully taxed, and if it were, you could still overclock it and scale the bandwidth with your OC'ed speed. But on the other side of the spectrum, if you had a system with 2x NVMe SSDs tapping into the 4.0 bus alongside a graphics card, those situations would be different, as PCI-E 4.0 would actually come in handy.

But it's all within a ~1% margin; nothing to worry about either way. I don't miss much whether I play at 186 fps or 194 fps.


----------



## BorisDG (Sep 16, 2020)

Jism said:


> But on the other side of the spectrum, if you had a system with 2x NVMe SSDs tapping into the 4.0 bus alongside a graphics card, those situations would be different, as PCI-E 4.0 would actually come in handy.


That's why the Intel HEDT platform comes in handy, with its many PCIe lanes.


----------



## dirtyferret (Sep 16, 2020)

I can squeeze out an extra 1.1% FPS via PCIE 4.0?! THIS CHANGES EVERYTHING!


----------



## Aretak (Sep 16, 2020)

Really need to see results for the games that other outlets have found to suffer when dealing with limited PCIe bandwidth, like Horizon: Zero Dawn. Some titles seem to lose a lot more performance than others.


----------



## BorisDG (Sep 16, 2020)

Aretak said:


> Really need to see results for the games that other outlets have found to suffer when dealing with limited PCIe bandwidth, like Horizon: Zero Dawn. Some titles seem to lose a lot more performance than others.


Horizon is a terrible port. Stutters like hell. The game is from the PS4 era, so I really doubt the PCIe gen matters... See Death Stranding with the same engine (probably even modernized, because it's newer); it performs excellently.


----------



## HenrySomeone (Sep 16, 2020)

I love how the "we only cater to AMD fanboys these days" Hardware Unboxed used a Ryzen test platform despite it being almost 10% slower even at 1440p, lmao!


----------



## phill (Sep 16, 2020)

Thank you @W1zzard


----------



## Cheeseball (Sep 16, 2020)

It looks like the main advantages of PCI-E 4.0 are only for storage use. Makes sense if you do a lot of sequential data transfer.


----------



## Vader (Sep 16, 2020)

HenrySomeone said:


> I love how the "we only cater to AMD fanboys these days" Hardware Unboxed used a Ryzen test platform despite it being almost 10% slower even at 1440p, lmao!


Apparently you missed the community post where they asked people which platform to test RTX 3000 with. The AMD config was what people wanted to see.


----------



## HenrySomeone (Sep 16, 2020)

Nope, I've seen it, and that's just it: "what people wanted to see" (those people undoubtedly being team red fans to a worrying degree; that channel is now little better than the laughably biased Moore's Law Is Dead and the like) instead of what is objectively better. I love how Steve from Gamers Nexus made a subtle but unmistakable reference to it by saying that they made sure to use the "fastest CPU for every appropriate test".


----------



## Papahyooie (Sep 16, 2020)

HenrySomeone said:


> I love how the "we only cater to AMD fanboys these days" Hardware Unboxed used a Ryzen test platform despite it being almost 10% slower even at 1440p, lmao!



Literally on the first page of the review:
"Since 3rd Gen Ryzen is the only desktop processor series with PCIe Gen 4 as of this writing "

How exactly are you going to test PCI-E 4.0 scaling on the competition, bud?


----------



## dirtyferret (Sep 16, 2020)

HenrySomeone said:


> I love how Steve Gamers Nexus made a subtle but unmistakable reference to it by saying that they made sure to use the "fastest cpu for every appropriate test"


Steve calling out another site and proclaiming himself better than that site? Ridiculous! Next you'll be telling us he went an entire 20 seconds without brushing his hair out of his face in one of his videos!


----------



## Bubster (Sep 16, 2020)

NVIDIA knows that PCIe 4.0 will matter more for RTX IO later on; PCIe 3.0 at 32 GB/s is still fast. I load games from a RAID 0 SATA SSD array in under 10 seconds, a fifth of the time it took with a legacy 7200 RPM HDD. I imagine it will be much faster with PCIe 3.0 NVMe, at least double the speed I have... the only deterrent is the cost of 2 TB or larger NVMe M.2 SSDs.


----------



## HenrySomeone (Sep 16, 2020)

Papahyooie said:


> Literally on the first page of the review:
> "Since 3rd Gen Ryzen is the only desktop processor series with PCIe Gen 4 as of this writing "
> 
> How exactly are you going to test PCI-E 4.0 scaling on the competition, bud?


No bud, not for reasons of PCIe scaling: Hardware Unboxed tested the 3080 *exclusively* on Ryzen!


----------



## Papahyooie (Sep 16, 2020)

HenrySomeone said:


> No bud, not for reasons of PCIe scaling: Hardware Unboxed tested the 3080 *exclusively* on Ryzen!


You're commenting on a review of PCI-E 4 scaling.... Who cares what hardware unboxed did?


----------



## juular (Sep 16, 2020)

jeremyshaw said:


> Any chance of frametime/lows analysis? It was done in the main RTX 3080 article, and it wouldn't have to be too broad - just covering max PCIe 3.0, 4.0 configs would be more than enough to conclusively settle the matter in a pre-DirectStorage timeframe.


This! Average framerate data isn't nearly as relevant as frametime analysis when all that changes is the interface bandwidth; I can't believe the authors of this article didn't think of that. I imagine we see such a big difference with PCIe 1.0 x8 just because it stutters like hell trying to move all that data through.


----------



## HenrySomeone (Sep 16, 2020)

Papahyooie said:


> You're commenting on a review of PCI-E 4 scaling.... Who cares what hardware unboxed did?


They did this exactly on the pretext of "needing a PCIe 4.0 platform", and it's no use commenting there, as the channel is absolutely flooded with rabid fanboys.


----------



## Papahyooie (Sep 16, 2020)

HenrySomeone said:


> They did this exactly on the pretext of "needing a pcie 4.0 platform" and it's no use commenting there as the channel is absolutely flooded with rabid fanboys.


That's completely irrelevant to the thread here.


----------



## HenrySomeone (Sep 16, 2020)

Is it? It's about the very subject matter, is it not? 

_Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation_


----------



## lexluthermiester (Sep 16, 2020)

juular said:


> Average framerate data isn't nearly as relevant as frametime analysis when all that changes is the interface bandwidth; I can't believe the authors of this article didn't think of that.


They did. It's not relevant enough to warrant a detailed analysis.

What would be interesting, and I mentioned this in another thread already, would be a run of tests that show performance on actual period-correct hardware. However, it is possible that @W1zzard does not have sample hardware available for such a series of tests. Not that it is critical. The information rendered in this article gives a good reference point for understanding the limitations of each PCIe spec.

Still, it would be interesting to see the effect other potential limitations have on the result. For example, CPU, chipset and RAM throughput. The PCIe bus spec is only one part of that equation.


----------



## InVasMani (Sep 16, 2020)

I don't think PCIe 4.0 is a big deal yet, but I do think that bandwidth will more readily be taken advantage of going forward. Cache acceleration integrated into GPU design has big potential, especially for AMD, who already has all the IP available to leverage it extremely well. Intel in some ways has even more IP to leverage that sort of thing thanks to Optane, though it depends how you look at it, because Optane is inferior to DDR4 in terms of sheer speed, which in this scenario is more vital.

I'd really love to see how far an individual, very well binned DDR4 chip can scale paired with a Zen 3 CPU, perhaps a 2c or 4c variant: something cut down, designed simply for cache acceleration, decompression, and compression on a GPU. An ARM acquisition could shake things up a fair bit, too; Nvidia would then have the ability to do something similar without resorting to licensing ARM chip designs. Intel obviously can as well, with the added option of an Optane cache layer; they also did some interesting things on the cache side with the eDRAM on desktop Broadwell chips prior to Skylake, and incorporating a bit of that might work great for this intended use.


----------



## Kissamies (Sep 16, 2020)

Great review as always with these PCIe scaling reviews.


----------



## lexluthermiester (Sep 16, 2020)

Ok, here's a video testing a 3080 with an FX CPU (which are all PCIe 2.0). Should have known Greg would do a test like this:


----------



## Rob94hawk (Sep 16, 2020)

He's testing 4.0 because the AMD platform is the only one with PCIe 4.0!

_Edited out parts not in compliance with forum guidelines. Please try to avoid such comments in the future. - TPU Moderation_


----------



## The Von Matrices (Sep 16, 2020)

So that leaves me with a conundrum. I have an Intel desktop platform with only 16 PCIe 3.0 lanes. Do I run the GPU at x8 and put my SSD on the remaining CPU lanes, or do I run the GPU at x16 and put the SSD on the slower PCH lanes? It's not a simple answer if the GPU is reading directly from the SSD and both need the bandwidth. This article says I lose about 3% when I reduce the GPU to x8, but I don't know how much the SSD benefits.


----------



## good11 (Sep 17, 2020)

When the RTX 3090 is out, please test PCI-E 4.0 vs 3.0 again with RTX 3090 SLI.


----------



## grammar_phreak (Sep 17, 2020)

Is there going to be a PCI-E Scaling benchmark for the RTX 3090?
What I'd like to see is whether there is any variation at 8K, since 4K already taxes the bus the most.


----------



## Frick (Sep 17, 2020)

Cheeseball said:


> It looks like the main advantages of PCI-E 4.0 are only for storage use. Makes sense if you do a lot of sequential data transfer.



Also dual-port 40 Gb/s NICs, which use x8 as standard.









Feeding the Data Beast with PCIe 4.0 | Mellanox Technologies Blog (blog.mellanox.com)

Summary: Artificial Intelligence, Virtual Machines, containerization, and 5G mobile wireless networks are key drivers for next-generation high-performance systems. However, current servers with PCI Express (PCIe) 3.0 require wide busses to keep up with the latest Ethernet or InfiniBand speeds or...


----------



## geon2k2 (Sep 17, 2020)

HenrySomeone said:


> No bud, not for the reasons of pcie scaling - Hardware Unboxed tested the 3080 *exclusively *on Ryzen!



That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.


----------



## EarthDog (Sep 17, 2020)

Charcharo said:


> I think we need 1% Lows on these since those tend to be slightly more impacted by PCIE bandwidth.


Maybe I'm understanding it wrong, but why would the 1% lows of fps use more bandwidth?


----------



## lexluthermiester (Sep 17, 2020)

EarthDog said:


> Maybe I'm understanding it wrong, but why would the 1% lows of fps use more bandwidth?


They wouldn't.


----------



## mahirzukic2 (Sep 17, 2020)

The Von Matrices said:


> So that leaves me with a conundrum.  I have an Intel desktop platform with only 16 PCIe 3.0 lanes.  Do I run the GPU at x8 and put my SSD on the remaining CPU lanes or do I run the GPU at x16 and put the SSD on the slower PCH lanes?  It's not a simple answer if the GPU is reading directly from the SSD and both need the bandwidth.  *This article says I lose about 3% when I reduce the GPU to x8* but I don't know how much the SSD benefits.


So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 30xx processors.
I guess the difference will be even larger with the new Zen 3 Ryzen 40xx or 50xx processors (whichever they name them).


----------



## rodneyhchef (Sep 17, 2020)

TechLurker said:


> Considering how well the GPU still tends to scale even on 2.0 and even 1.0 in some cases, I'd like to see, just mostly for fun, how much FPS is possible in an ancient rig running 1.1 PCIe and the top CPU of the time period at 1080p minimum (an extreme case of blowing the budget on GPU-only upgrades for old rigs). Given that earlier Ryzens ran on PCIe 2.0, it's not too surprising to see GPUs still able to provide respectable numbers on PCIe 2.0 in still a fairly modern setup.
> 
> That said, I could still see FPS chasers waving around these graphs and insisting that they absolutely must upgrade to Ryzen 3000 or the upcoming 4000 NOW for that extra .5% FPS boost. Which would perfectly benefit AMD's CPU division and those mobo companies slightly burned by Intel's delay on 4.0 capable CPUs (from a video where GN mentioned the topic).



If anyone wants to send me a 3080, I'll happily test on my "last of the PCIe 2.0" 2600K.

I think it's probably time to upgrade. I'm pretty sure this hardware all belongs in the retro forum now!



lexluthermiester said:


> Ok, here's a video testing a 3080 with an FX CPU(which are all PCIe 2.0). Should have known Greg would do a test like this;



I watched this vid with interest as my CPU is from the same era. I ran the Time Spy Extreme test on my rig and this is my result.

Big difference in the CPU score!


----------



## Chrispy_ (Sep 17, 2020)

Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try to plug an $800 graphics card into a board from the Core 2/Phenom II era.



*PSA, FUTURE 3080 OWNERS: 
DO NOT USE A CORE2 DUO.
THE PCIe 2.0 BANDWIDTH WILL BE A BOTTLENECK* *



* - I think there may be some other bottlenecks too.


----------



## Rob94hawk (Sep 17, 2020)

Chrispy_ said:


> Wow, PCIe 2.0 is finally starting to show a measurable performance penalty, provided you try to plug an $800 graphics card into a board from the Core 2/Phenom II era.
> 
> 
> 
> ...



Going to plug my 3090 into my old socket 775 system when it gets here. I'll let you know how it works out.


----------



## eskwy911 (Sep 17, 2020)

And what about temps ?


----------



## EarthDog (Sep 17, 2020)

geon2k2 said:


> That's because they had a poll with their viewers, and 88% or so chose to benchmark the 3080 with Ryzens.
> Which makes sense, seeing as most budget-conscious DIY users jumped to Ryzen.


Maybe they were looking for validation of platform superiority, but it turns out it's useful in other areas most don't utilize.


eskwy911 said:


> And what about temps ?


Look at the review of the GPU, not the PCIe scaling article. 









NVIDIA GeForce RTX 3080 Founders Edition (www.techpowerup.com)

NVIDIA's new GeForce RTX 3080 "Ampere" Founders Edition is a truly impressive graphics card. It not only looks fantastic, performance is also better than even the RTX 2080 Ti. In our RTX 3080 Founders Edition review, we're also taking a close look at the new cooler, which runs quietly without...


----------



## The Von Matrices (Sep 17, 2020)

mahirzukic2 said:


> So to put it another way: if you gain ~2-3% by going AMD with PCIe 4.0, you get those 2-3% and close out the battle with Intel CPUs at 1440p and 4K on current Ryzen 30xx processors.
> I guess the difference will be even larger with the new Zen 3 Ryzen 40xx or 50xx processors (whichever they name them).


That's not quite what I was referring to.  I'm talking about PCIe 3.0 x8 vs PCIe 3.0 x16.  When you go to PCIe 4.0, the difference between x8 and x16 is negligible.  But most people with Intel platforms just run the GPU at PCIe 3.0 x16 and don't worry about the SSD's bandwidth on the PCH.  I'm wondering if I should worry about SSD bandwidth.


----------



## mkontra (Sep 17, 2020)

Why not benchmark loading times? That's when data is sent to the GPU. Isn't that more relevant than FPS?


----------



## EarthDog (Sep 17, 2020)

mkontra said:


> Why not benchmark loading times? That's when data are sent to the GPU. Isn't it more relevant than FPS?


Nope. Not relevant at all. Game loading times are primarily a storage limitation, and VRAM is faster than any NVMe SSD by leaps and bounds.


----------



## mkontra (Sep 17, 2020)

EarthDog said:


> Nope. Not relevant at all. Loading times of games is primarily a hard drive limitation. vRAM is faster than any NVMe SSD by leaps and bounds.



But the level data has to move from the CPU to the GPU through PCIe lanes. Isn't that where there's a chance to saturate the PCIe bandwidth, if the NVMe SSD is PCIe 4.0 compliant?


----------



## savior02 (Sep 17, 2020)

I guess I'll stick with my hexacore X58 running on 2.0, and with my NVMe my PC is even more future-proof!


----------



## lexluthermiester (Sep 17, 2020)

Chrispy_ said:


> *PSA, FUTURE 3080 OWNERS: *
> *DO NOT USE A CORE2 DUO/QUAD CPU.*
> *THAT CPU SERIES WILL BE A HUGE BOTTLENECK**


Fixed that for you. The information in the above article shows that PCIe 2.0 isn't that great of a bottleneck. Additionally, most of the C2D/C2Q-series chipsets were PCIe 1.1, not 2.0 as you stated. The PCIe 2.0 standard was not adopted until the P4x, G4x, Q4x and X38/X48 chipsets, which came late in the product lifecycle.


----------



## EarthDog (Sep 17, 2020)

mkontra said:


> But the level data has to move from the CPU to the GPU through PCIe lanes. Isn't that where there's a chance to saturate the PCIe bandwidth, if the NVMe SSD is PCIe 4.0 compliant?


A GPU plays little role in game level load times. Perhaps loading is faster because of the bandwidth, but that isn't the GPU's doing. Unless I'm misunderstanding what you're saying?


----------



## Calmmo (Sep 17, 2020)

He means MS DirectStorage, whenever that's coming (with Nvidia branding for Nvidia cards).
That's still quite far away; maybe wait for the first few games that support it before jumping to conclusions or making assumptions about how much extra PCIe bandwidth it will require.


----------



## Luckz (Sep 18, 2020)

Bubster said:


> I load games from a RAID 0 SATA SSD array in under 10 seconds, a fifth of the time it took with a legacy 7200 RPM HDD.


You do realise RAID 0ing SSDs does not help your (game, Windows) performance in any way? A reasonable and cheap SATA SSD will have 20 or even 40 times the 4K random read (Q1/T1) performance of a typical old HDD.
For NVMe, RAID would, if anything, lose performance due to the added latency, since NVMe is already very fast compared to SATA; with SATA RAID, the latency should not matter as much.
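A toy model of that distinction (illustrative numbers and function names are mine, not benchmark results):

```python
# Toy model: RAID 0 striping splits a large sequential transfer across both
# drives, but a single queue-depth-1 4K random read still lands on exactly one
# drive, so its latency (and therefore its IOPS) is unchanged by the array.
def raid0_sequential_mbps(per_drive_mbps: float, drives: int) -> float:
    """Ideal striped sequential throughput (ignores controller limits)."""
    return per_drive_mbps * drives

def raid0_qd1_random_iops(per_drive_iops: float, drives: int) -> float:
    """QD1 random reads: one outstanding request hits one drive at a time."""
    return per_drive_iops  # the `drives` count deliberately does not help here

print(raid0_sequential_mbps(550, 2))     # sequential scales with the stripe
print(raid0_qd1_random_iops(10_000, 2))  # QD1 random reads do not
```

Game loading leans heavily on small random reads, which is why the big win comes from moving off an HDD at all, not from striping two SSDs.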


----------



## lexluthermiester (Sep 18, 2020)

Luckz said:


> You do realise RAID 0ing SSDs does not help your (game, Windows) performance in any way?


Not true at all. When doing data copy operations, should a user hit the cache threshold (which is not uncommon), the RAID function does keep things flowing better. SSD RAID 5 is even better for mitigating cache-threshold problems.


----------



## Bubster (Sep 18, 2020)

lexluthermiester said:


> Not true at all. When doing data copy operations, should a user reach the cache threshold(which is not uncommon), the raid function does keep things flowing better. SSD RAID5 is even better for mitigating cache threshold problems.


RAID 0 performance is fantastic compared to a single SATA SSD, let alone back in the days of HDDs: much faster loading times, by a large multiple... anyone who says otherwise obviously didn't experience it. Putting game folders or other large files on an SSD is worth every penny, and it's even better in a RAID config: double the speed, nearly 5 to 10 times faster than even a good Toshiba 220 MB/s HDD.


----------



## Charcharo (Sep 18, 2020)

EarthDog said:


> Maybe I'm understanding it wrong, but why would the 1% lows of fps use more bandwidth?



PCIe bandwidth required varies on a scene-by-scene basis; it isn't fully linear. It's like CPU performance in a sense: a longer benchmark reporting just the average will smooth out the drops. Of course, it's not as important as the CPU itself, so it will never be as bad as, say, a 4c/4t CPU running a modern game during an intense action scene, but still.

IDK how people don't know these things. I guess they stare at benches and don't actually play the games.
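A quick numeric sketch of why a long average can hide those per-scene spikes (synthetic frame times, not measurements):

```python
# Synthetic trace: 99 smooth 10 ms frames plus one 60 ms stall (e.g. a burst
# of PCIe traffic). The average barely moves, but the 1% low craters.
frame_ms = [10.0] * 99 + [60.0]

avg_fps = 1000 * len(frame_ms) / sum(frame_ms)
worst_1pct = sorted(frame_ms)[-max(1, len(frame_ms) // 100):]  # slowest 1%
low_1pct_fps = 1000 * len(worst_1pct) / sum(worst_1pct)

print(f"average: {avg_fps:.1f} fps, 1% low: {low_1pct_fps:.1f} fps")
```

One stalled frame in a hundred costs the average only a few fps while the 1% low drops to the stall's framerate, which is why lows are the metric to watch for bandwidth hiccups.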


----------



## lexluthermiester (Sep 18, 2020)

Charcharo said:


> PCIE bandwidth required varies on a scene by scene basis. It isn't fully linear.


Rubbish & nonsense. PCIe bandwidth is perfectly linear, literally by design.


----------



## kiriakost (Sep 18, 2020)

I am transferring this remark from the article for further analysis:
*... the bottleneck remains with the "CPU processing power".*

CPU processing power is spent on gaming only through whatever runs in software, the best-known such software being Microsoft DirectX.
Now I will make the wild assumption that when most DirectX commands are executed by the GPU, the CPU enjoys less stress.

So I have to ask: whose responsibility is it that the CPU gets stressed unimaginably instead of the GPU? Someone made a mistake here.
From the presented test results, the comparison of *Battlefield V* vs *Project Cars 3* works as proof to me that game developers are 100% responsible for any CPU-caused bottleneck.

Some people forget that a CPU is two engines in one; the second engine is the math processor.
For example, in a game scene with a forest, if you add 1000 trees to the landscape and you decide (as the game developer) that the math processor should assist the GPU, then you are responsible for your own choices.
A CPU-caused bottleneck will be there, and the cost of this mistake gets transferred to the consumer's pocket: someone must pay for and buy the more powerful CPU.

Therefore what we have is a pack of badly written games and a pack of better-written game titles, *and now we need to reward the game developers* who respect the consumer's wallet by making fewer mistakes.

*Regarding raw GPU power*, NVIDIA made more noise about the 3000 series than it delivered in performance.
I predict that when this design moves to *six-nanometer lithography* it will deliver above 120 fps at 4K and 300 fps at 1920x1080.
Personally, I would buy such a dream card and grow old with it.

In my eyes the 3000 series is another hot pan using too much electrical energy.
PCIe bandwidth load increases when someone decides to share GPU load with the CPU.


----------



## Chrispy_ (Sep 18, 2020)

lexluthermiester said:


> Fixed that for you.


You took my comment seriously, even with the footnote call-out?

Whoosh!


----------



## lexluthermiester (Sep 18, 2020)

Chrispy_ said:


> You took my comment seriously, even with the footnoote call out?
> 
> Whoosh!


No one here is a mind reader. If you're going to be funny, make it obvious with a proper notation such as "/s". My edit still applies.


----------



## Charcharo (Sep 19, 2020)

lexluthermiester said:


> Rubbish & nonsense. PCIe bandwidth is perfectly linear, literally by design.



Bandwidth is linear. The requirements _per scene_ are *not *linear.

IDK if I worded it wrongly, but what I am saying *is* true.  Not every scene has the same geometry / raster / bandwidth / shader requirements. Not every scene is as heavy on PCIE bandwidth. I hope this makes it clearer.

EDIT:

You can even see this here. The RT (custom Vulkan RT path by Nvidia) benchmark, despite the lower FPS, has higher bandwidth requirements on the PCIe bus than other benches. And different games react differently too.


----------



## rtwjunkie (Sep 19, 2020)

My suspicion that 3.0 will still be good enough for a while has been confirmed. Once RTX IO or DirectStorage becomes an actual thing in games, we will see a need for 4.0, I think.



BorisDG said:


> Horizon is a terrible port. Stutters like hell.


Funny, I don’t get those stutters. Must be it’s not that bad of a port... or it would apply to everyone.


----------



## lexluthermiester (Sep 19, 2020)

Charcharo said:


> Bandwidth is linear. The requirements _per scene_ are *not *linear.


Thank you for correcting yourself.



rtwjunkie said:


> Funny, I don’t get those stutters.  Must be it’s not that bad of a port...or it would apply to everyone.


This. Just because one person has a problem doesn't automatically mean everyone will have that problem.


----------



## Charcharo (Sep 20, 2020)

lexluthermiester said:


> Thank you for correcting yourself.
> 
> 
> This. Just because one person has a problem doesn't automatically mean everyone will have that problem.



The original comment was easy to understand anyway. Only pedants would have a problem with it, but it is corrected


----------



## Papahyooie (Sep 21, 2020)

Charcharo said:


> PCIE bandwidth required varies on a scene by scene basis. It isn't fully linear.





lexluthermiester said:


> Rubbish & nonsense. PCIe bandwidth is perfectly linear, literally by design.





lexluthermiester said:


> Thank you for correcting yourself.



You don't realize he said it right the first time? "Bandwidth REQUIRED." He never said PCIe bandwidth isn't linear. Don't be so smug when you didn't read it right.


----------



## lexluthermiester (Sep 22, 2020)

Papahyooie said:


> You don't realize he said it right the first time? "Bandwidth REQUIRED". He never said PCIe bandwidth isn't linear. Don't be so smug when you didn't read right.


Yes, reading comprehension does seem to be a problem here. But whatever, think what you wish.


----------



## BorisDG (Sep 22, 2020)

rtwjunkie said:


> My suspicion that 3.0 will still be good enough for a while has been confirmed. Once RTX IO or DirectStorage becomes an actual thing in games, we will see a need for 4.0, I think.
> 
> 
> Funny, I don’t get those stutters.    Must be it’s not that bad of a port...or it would apply to everyone.


Even after that, since RTX IO is supported by Turing, which is PCIe 3.0.

About Horizon, I have zero issues as well, but I watched many people streaming it (including a person with your CPU, an 8700K, and a 1080 Ti in SLI) and they had weird jitter/stutter in gameplay later in their playthroughs. It's really weird and I haven't seen anything like it before.

*Jittery* is the word, not stutter; sorry, it didn't come to mind at the time.


----------



## Jay_ombie (Oct 3, 2020)

Can you guys just confirm whether or not I'd be able to run a 3080 on my system?

I too am running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.

Main concern is my PCIe 2.0 slot.


Thanks.


----------



## INSTG8R (Oct 3, 2020)

lexluthermiester said:


> Thank you for correcting yourself.
> 
> 
> This. Just because one person has a problem doesn't automatically mean everyone will have that problem.


I use the FH4 benchmark as my daily driver because it's super consistent, sufficiently loads all components, and gives very granular results.


----------



## lexluthermiester (Oct 4, 2020)

Jay_ombie said:


> Can you guys just confirm whether or not I'd be able to run a 3080 on my system?
> 
> I too am running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
> I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.
> ...


If it's the system in your specs, it'll run, but you will experience some CPU bottlenecking. Not in all games or at all resolutions, but it's going to happen.


----------



## EarthDog (Oct 4, 2020)

Jay_ombie said:


> Can you guys just confirm whether or not I'd be able to run a 3080 on my system?
> 
> I too am running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
> I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.
> ...


You'll lose a few/several/double-digit percent at 1080p, sure. Between the couple of percent from PCIe 2.0 and the notably lower IPC of the Sandy Bridge CPU (even overclocked), it will make a difference in most titles. That said, surely you'll still reach 144 Hz/fps, I'd imagine.


----------



## lexluthermiester (Oct 4, 2020)

EarthDog said:


> Between the couple % from pcie 2.0


PCIe 2.0 is not the main limiting factor.

The CPU and RAM in his system will be the bigger bottleneck.


----------



## InVasMani (Oct 4, 2020)

Rob94hawk said:


> Going to plug my 3090 into my old socket 775 system when it gets here. I'll let you know how it works out.


I'm sure it works fine, actually. If you want to make use of the GPU's potential, just be sure to shift the bottleneck toward the GPU by pushing the graphics resolution; DSR is perfect for that, since even at 1080p or 1440p it heavily slants the load toward GPU performance and away from any CPU bottleneck. I don't think anyone pairing that type of GPU with socket 775 would realistically expect high-refresh esports gaming to go over well, since that's much more latency sensitive and much less GPU dependent. Testing a GPU like that at 480p with a more modern CPU and high refresh rate isn't any better: you bottleneck the CPU unrealistically and throw most of the perks of using such a GPU out the window.


----------



## Chrispy_ (Oct 4, 2020)

Jay_ombie said:


> Can you guys just confirm whether or not I'd be able to run a 3080 on my system?
> 
> I am running quite old tech, but it's efficient, streamlined, overclocked tech that's still providing excellent playability on a 144 Hz monitor.
> I just could do with the extra power of the 3080 over my humble 1080 Ti for VR.
> ...



It depends on the game; some games are just fine with an old 2600K, but many are not. Even if the _average_ FPS on old Intel quad cores looks okay, the minimum framerates are really bad compared to newer higher-core-count CPUs with more cache and faster RAM. I moved one of my machines from an i7-3770K on DDR3-1600 to an R7 3700X on DDR4-3200 when they came out over a year ago, and games that I thought were running okay suddenly ran much more smoothly when the action started getting busy.

We are seeing more games run badly on quad core CPUs than ever before, and with the PS5 and XBSX both having 8-core, 16-thread CPUs in them, that is a trend that will definitely continue.

If you can afford an RTX 3080 you can also afford to change your CPU/RAM/Motherboard. You should really wait a couple of months to see how Big Navi, Ryzen 5000-series, and the RTX 3070 perform but a 3700X, B550, and 16GB of DDR4-3600 is going to cost you _$250 less_ than a 3080 right now and it'll probably improve your gaming experience more than a 3080 by improving the minimum fps rather than the average. By November, the 3700X is probably going to be heavily discounted on sale, and the replacement is likely to be worth it over the old 3700X even at the expected $350 price point.

Steve has the answer:


----------



## EarthDog (Oct 4, 2020)

lexluthermiester said:


> PCIe 2.0 is not the main limiting factor...
> ...Yet the CPU and RAM in his system will be the bigger bottleneck


I didn't say it was. I said that between this (PCIe 2.0) and that (SB IPC), it will be bottlenecked.












4% to be exact. It's one part of the big picture. I even said "a couple" percent, minimizing that value. Nothing there implies I thought it was the main concern. SB IPC comes into play (5 GHz Sandy Bridge isn't close to Comet Lake at 5 GHz), as well as memory speeds, as you added. That doesn't count any games that are capped by a 4c/8t part...

If I went wrong, it was in underestimating how slow the CPU is and its ability to reach 144 fps (in some titles, like SOTTR).


----------



## Jay_ombie (Oct 4, 2020)

lexluthermiester said:


> If it's the system in your specs, it'll run, but you will experience some CPU bottlenecking. Not in all games or at all resolutions, but it's going to happen.





EarthDog said:


> you'll lose a few/several/double digit percent at 1080p, sure. Between the couple % from pcie 2.0 and the notably lower IPC of the sandybridge cpu (even with it overclocked), it will make a difference in most titles. That said, surely you'll still reach 144hz/fps id imagine.





Chrispy_ said:


> It depends on the game; Some games are just fine with an old 2600K but many are not. Even if the _average_ FPS on old Intel quad cores looks okay, the minimum framerates are really bad compared to newer higher-core count CPUs with more cache and faster RAM. I moved one of my machines from a i7-3770K on DDR3-1600 to an R7 3700X on DDR4-3200 when they came out over a year ago and games that I thought were running okay suddenly ran much more smoothly when the action started getting busy.




Thanks guys,

I've been thinking about the new-computer route, but there is a big difference (for me) between buying the £700 card and buying a whole new computer system plus the 3080 card, which brings it close to approx £1800.

The computer is old yet still plays the games I play at 1080p exceedingly well. Comparing my system with friends' computers that were bought earlier this year or late 2019 using UserBenchmark, my system is beating theirs hands down. That's largely due to the 1080 Ti (OC), but nevertheless it's still doing the job. It will certainly depend on the games I choose to play, and eventually I will need to throw in the towel and just get a new system.

Meantime my focus is VR, which requires a lot of GPU horsepower above all else. I use a Rift CV1 at present, so my thinking is all I need is an extra 30-40% increase in FPS and I can play at the 90 fps my current VR set is designed to max out at, rather than the 45 fps it tends to default to due to taxing games (heavily modded Skyrim).

I checked the CPU whilst playing and the cores hover around the 60-70% mark, so I still have some headroom. I also helped my cause by overclocking my CPU to 4498.95 MHz; it's been at this clock speed for years (I overclock everything).

I am glad to at least hear that the PCIe limitation isn't going to hamper me as much as I first worried. It's just about balancing what I need at minimum against simply getting a new computer system for the sake of new tech, and how many more frames I'd get over my old tech, when I could technically just use a new GPU with the old stuff and still gloriously sail in the fast lane with the games I play.


----------



## lexluthermiester (Oct 4, 2020)

Jay_ombie said:


> The computer is old yet still plays exceedingly adequately the games I play at 1080p


If you're only going to play at 1080p, your 1080ti is still an excellent card unless you want the RTX features. So for your situation, waiting for the 3060/3070 cards to come out would be a better idea, especially if you plan on upgrading the rest of your system later.


----------



## Jay_ombie (Oct 4, 2020)

lexluthermiester said:


> If you're only going to play at 1080p, your 1080ti is still an excellent card unless you want the RTX features. So for your situation, waiting for the 3060/3070 cards to come out would be a better idea, especially if you plan on upgrading the rest of your system later.



That's my thinking: if I can at least grab the card, hack my case to squeeze it in, and get some use from its boost clocks until I am able to commit to a fully upgraded system.
I also overclocked the overclock on the 1080 Ti, so that's performing like the clappers to keep my VR bearable.


----------



## Chrispy_ (Oct 7, 2020)

Jay_ombie said:


> That's my thinking: if I can at least grab the card, hack my case to squeeze it in, and get some use from its boost clocks until I am able to commit to a fully upgraded system.
> I also overclocked the overclock on the 1080 Ti, so that's performing like the clappers to keep my VR bearable.


So you don't have the budget to upgrade the CPU and GPU at the same time? You now have to decide if you want to upgrade the GPU or the CPU first.

IMO, a new CPU with the existing 1080Ti will be faster/better than your old 2600K and an RTX3080.

Buying a 3080 right now is a bad idea anyway: it's extremely hard to get one _at all_, and you're likely to be price-gouged. You will also need to spend another $150 on a power supply, as a 3080 wants at least 60 A on a single 12 V rail (and I'd be surprised if your 9-year-old OCZ unit can still do that). Once you get it all sorted, you'll fire up a game and the fps counter will show higher numbers, but where it felt bad due to CPU slowdowns before, it'll still feel exactly as bad now, and you'll have about a $1,000 hole in your wallet and a small space heater burning through your power bill.

Unless you have a 4K monitor (or high-refresh 1440p monitor), there's literally zero point in buying a 3080. Get the monitor and CPU first, then upgrade the 1080 Ti with whatever's best at that point in time. Second-guessing what you'll want in the distant future but paying a premium up front for it right now is a choice you're entitled to make, but not one I can recommend. Let's say the 3080's 10 GB is relevant for the next 24 months. Are you going to have a 4K monitor and new CPU in the next 12 months? If not, you can bet that the 3080 won't cost $700 and won't have availability/stability issues 12 months from now, and there's a non-zero chance that the 3080 won't even be the best choice of graphics card by then.


----------



## Jay_ombie (Oct 8, 2020)

Chrispy_ said:


> So you don't have the budget to upgrade the CPU and GPU at the same time? You now have to decide if you want to upgrade the GPU or the CPU first.
> 
> IMO, a new CPU with the existing 1080Ti will be faster/better than your old 2600K and an RTX3080.
> 
> ...


----------



## Chrispy_ (Oct 8, 2020)

For the record, I have a Rift DK2 and a Samsung Odyssey HMD and my 2070S is more than up to the job. The problem with VR is minimum framerates, something that your 1080Ti is not to blame for.

Without a doubt, the 1080 Ti is fine for current VR titles; it's your ancient DDR3 and Sandy Bridge platform holding everything back.


----------



## Rebe1 (Jan 18, 2021)

I somehow missed this article. @W1zzard, question for you: if you had to guess/estimate the general performance difference (roughly) in games between PCIe 3.0 x16 and PCIe 3.0 x4 using an RTX 3080, what would it be? I have an RX 6800 non-XT, and with an R5 5600X on a B550 motherboard it is ~3000 pts on TimeSpy (graphics score)...


----------



## W1zzard (Jan 19, 2021)

x4 3.0 is pretty much equal to x16 1.1; you can use those results from the review.


----------

