
NVIDIA GeForce RTX 4090 PCI-Express Scaling with Core i9-13900K

This comparison makes no sense.
Alder Lake and Raptor Lake use PCI Express Gen 5, not Gen 4.
What's the point of this comparison?
 
Excuse me? It was a simple question. I'm not, as you say, whining about it. I just want to know why AMD was excluded from what many might refer to as one of the most important series of benchmarks to be featured on the Internet.

While it might've been true in the past, Intel is no longer the top dog in the industry. They have competition yet it seems nearly every publication and YouTube influencer uses Intel chips as their base in many of their benchmark rigs. Why? I'm not just calling out Wizzard here, I'm calling out... everyone in the benchmark space. Why always Intel?
Because you didn't read the article.

W1zzard said:
Roughly every one to two years we update our test system, so we took the opportunity to revisit this PCI-Express performance scaling topic on our latest 2023 VGA Test Bench.

Upgrading the graphics card review test-bed is no small feat here at TechPowerUp: it involves testing 40 graphics cards across 25 game tests in rasterization, plus nine with ray tracing, all of those at three resolutions, with additional time spent retesting suspicious results to correct testing errors. The whole exercise typically takes several weeks. We are finally done with our upgrade, and our latest machine rocks an Intel Core i9-13900K "Raptor Lake" processor, an EVGA Z790 DARK motherboard, 32 GB of DDR5-6000 memory, and an ATX 3.0 power supply that natively supports 12VHPWR. This is a significant uplift in not just CPU compute muscle, but also IPC from the eight "Raptor Cove" P-cores.

Why did W1zz choose the 13900K? Because at the time he updated the test system it was the fastest gaming CPU in the world, since the X3D Zen 4 CPUs hadn't been released yet.

Thank you for the article. I have a Z690 board and am installing a new heatsink on my Gen 4 Samsung SSD, and at the same time was considering moving it to the Gen 4 M.2 slot and leaving the Gen 5 just for my 4090. My PC is used for gaming and I don't see any reason to move the SSD unless it's for better airflow, which I may do anyway just for grins.
Your SSD doesn't need a heatsink. Don't waste your time.

Am I understanding you correctly? I'm on Z790 with a 13900K and a 4090. I currently have a Gen 4 M.2 in the slot closest to my CPU. My GPU is still running at x16. Are you saying it has to be a Gen 5 M.2?
Yes.
 
The RTX 4090 is PCI-E Gen 4, that's why. There is no Gen 5 GPU in existence to test.

When a Gen 5 GPU exists, Gen 5 scaling tests will be done.
The test is nonsense.
If the purpose is to test the performance loss from combining an SSD with a GPU, they should have tested using PCIe 5.

Current CPUs are PCIe 5, so they should have included a PCIe 5 x8 test. As it is, the article is simply useless and a waste of time for both the author and the people who read it.
 
The test is nonsense.
If the purpose is to test the performance loss from combining an SSD with a GPU, they should have tested using PCIe 5.

Current CPUs are PCIe 5, so they should have included a PCIe 5 x8 test. As it is, the article is simply useless and a waste of time for both the author and the people who read it.
No PCIe 5 x8 GPU exists. Can I borrow your time machine? I'll also need an x16 one so we can quantify the delta between x8 and x16.
 
No PCIe 5 x8 GPU exists. Can I borrow your time machine? I'll also need an x16 one so we can quantify the delta between x8 and x16.
The point of the article is to show how a GPU is limited by current-gen CPUs when it is paired with an SSD, or with a reduced-bandwidth slot at x8.

Current CPUs use PCIe 5 slots, so the article is nonsense.

You don't need a PCIe 5 GPU for this test; you can simply show that a PCIe 4 GPU has no reduction in performance while using a PCIe 5 slot at x8.

I repeat, this article is a waste of time and completely useless.
 
The point of the article is to show how a GPU is limited by current-gen CPUs when it is paired with an SSD, or with a reduced-bandwidth slot at x8.

Current CPUs use PCIe 5 slots, so the article is nonsense.

You don't need a PCIe 5 GPU for this test; you can simply show that a PCIe 4 GPU has no reduction in performance while using a PCIe 5 slot at x8.

I repeat, this article is a waste of time and completely useless.

Did you register here just to throw a tantrum? Your parents must be proud of you.

If you connect a Gen5 SSD, your Gen4 GPU will be running at 4.0 x8. Your Gen3 GPU will be running at 3.0 x8. The lanes are cut in half, the generation doesn't matter.

This test shows that you basically don't lose any GPU performance when using a Gen5 SSD. It's simulated, but you don't need a Gen5 SSD for this test, because you can set the PCI-E speed in the BIOS. So 3.0 x16 will give you the same result as 4.0 x8.
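To put numbers on that equivalence, here's a quick back-of-the-envelope calculator (my own sketch, not from the article; it only accounts for line rate and encoding, and ignores packet overhead):

```python
# Effective PCIe link bandwidth (one direction):
# line rate per lane (GT/s) x encoding efficiency x lane count.
GEN = {
    1: (2.5, 8 / 10),      # Gen1: 2.5 GT/s, 8b/10b encoding
    2: (5.0, 8 / 10),      # Gen2: 5.0 GT/s, 8b/10b encoding
    3: (8.0, 128 / 130),   # Gen3: 8 GT/s, 128b/130b encoding
    4: (16.0, 128 / 130),  # Gen4: 16 GT/s, 128b/130b encoding
    5: (32.0, 128 / 130),  # Gen5: 32 GT/s, 128b/130b encoding
}

def link_gbps(gen: int, lanes: int) -> float:
    """Effective one-direction bandwidth in GB/s."""
    rate, efficiency = GEN[gen]
    return rate * efficiency * lanes / 8  # 8 bits per byte

for gen, lanes in [(3, 16), (4, 8), (2, 8), (3, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_gbps(gen, lanes):.2f} GB/s")
# PCIe 3.0 x16: 15.75 GB/s    PCIe 4.0 x8: 15.75 GB/s
# PCIe 2.0 x8:   4.00 GB/s    PCIe 3.0 x4:  3.94 GB/s
```

3.0 x16 and 4.0 x8 land on exactly the same number, which is why the BIOS trick works.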
 
Did you register here just to throw a tantrum? Your parents must be proud of you.

If you connect a Gen5 SSD, your Gen4 GPU will be running at 4.0 x8. Your Gen3 GPU will be running at 3.0 x8. The lanes are cut in half, the generation doesn't matter.

This test shows that you basically don't lose any GPU performance when using a Gen5 SSD. It's simulated, but you don't need a Gen5 SSD for this test, because you can set the PCI-E speed in the BIOS. So 3.0 x16 will give you the same result as 4.0 x8.
No, that's not how it works.

If I connect a PCIe 5 SSD, the GPU will work at PCIe 5 x8 and the SSD will work at PCIe 5 x8, even if it only uses x4.
 
I see the children are here to whine as usual about "WhY DidN'T yOU RuN thIS on amD CPu" SHUT UP AND SIT DOWN. You don't bother to understand or care how much time and effort Wizz puts into running these benchmarks and providing the results FOR FREE. If you want AMD CPU benchmarks, run them yourself.


AMD didn't gimp it; they took a GPU that was designed to be used as a dGPU in laptops - connected to the CPU over 4 dedicated lanes of PCIe - and put it on a PCIe card so they had something below the 6600 to compete with Arc and the 1650/3050. But it turns out that a low- to mid-range GPU with a smaller amount of VRAM needs to transfer a lot more data over the PCIe bus, and a PCIe x4 link absolutely doesn't cut it in that scenario. On top of that, the 6500 XT GPU is also missing many features (because it was expected that the CPU it was coupled with would provide them), which makes it even more of a disappointment.

The 6500 XT's "predecessor", the 5500 XT, was designed for desktop with a PCIe x8 link, and worked pretty well as a result. I still don't know why AMD didn't rebrand the 5500 XT as the 6500 XT instead of trying to fit a square peg into a round hole - it's not like AMD or NVIDIA are strangers to rebranding old GPUs as new when necessary.
I think it's probably also worth noting that AMD's memory management on dGPUs appears to be less refined than NVIDIA's: even with an equal PCIe bus, performance on NVIDIA cards tends to degrade much more gracefully as they run into VRAM issues. The mechanism isn't really clear to me; NV could be automatically culling texture detail, but the FPS numbers rarely become as erratic as quickly as they do on AMD parts.
 
If I connect a PCIe 5 SSD, the GPU will work at PCIe 5 x8 and the SSD will work at PCIe 5 x8, even if it only uses x4.
No, the GPU will work at x8 with whatever PCIe capability it supports. You can't magically add new capabilities like that.

Yes, I have a PCIe 5.0 SSD, an engineering sample from Phison, the one with the small fan.
 
No, that's not how it works.

If I connect a PCIe 5 SSD, the GPU will work at PCIe 5 x8 and the SSD will work at PCIe 5 x8, even if it only uses x4.

Your Gen4 x16 GPU will work as Gen5 x8?

Well, I guess there's no point in continuing this conversation. You should google some of this stuff.
 
I have motherboards with Gen 1 and Gen 2.
They're used for movies and music.
 
If I connect a PCIe 5 SSD, the GPU will work at PCIe 5 x8 and the SSD will work at PCIe 5 x8, even if it only uses x4.

No, the GPU will be at Gen 4 because it only supports operating at Gen 4. Your graphics card does not magically become capable of operating at Gen 5 link speeds simply because you have Gen 5 on your CPU or SSD; that is not how PCI Express link training works. Remember: the generation number only tells you what the link bandwidth is PER LANE. A PCI-E Gen 4 device operates at 16 GT/s per lane MAXIMUM. It cannot operate at 32 GT/s because it is not designed to do so, and it will automatically negotiate 16 GT/s when plugged into a Gen 5 platform.
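If it helps, here's a toy model of the negotiation (my sketch, not actual link-training firmware): both ends advertise what they support, and the link comes up at the highest generation and width common to both.

```python
# Toy model of PCIe link negotiation: the link trains at the highest
# generation and lane count that BOTH ends support.
def negotiated_link(dev_gen: int, dev_lanes: int,
                    slot_gen: int, slot_lanes: int) -> tuple[int, int]:
    return min(dev_gen, slot_gen), min(dev_lanes, slot_lanes)

# RTX 4090 (a Gen4 x16 device) in a Raptor Lake Gen5 slot that a Gen5 SSD
# has bifurcated down to x8:
print(negotiated_link(4, 16, 5, 8))  # -> (4, 8), i.e. PCI-E 4.0 x8
```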

 
This comparison makes no sense.
Alder Lake and Raptor Lake use PCI Express Gen 5, not Gen 4.
What's the point of this comparison?
Alder Lake and Raptor Lake motherboards do have a PCIe Gen5 x16 slot but, unfortunately, their owners can do literally nothing with it until late 2024, GPU-wise. Some high-end boards offer bifurcation for NVMe Gen5 drives, or an entire AIC for NVMe Gen5 drives, to "make sense" of those Gen5 lanes.

You should rather be asking whether those Alder Lake motherboards with a Gen5 GPU slot made any sense in 2021, 2022, and 2023. By the time Gen5 peripherals become more mainstream, in 2024 and onwards, many high-end Alder Lake motherboard owners will surely want to buy a new motherboard, as Intel will move to the LGA 1851 socket.
 
Alright, can we all, like, relax?

There are no tangible performance gains from a PCIe 4.0 x4 SSD, let alone 5.0 - the way NAND works (it's not bit-addressable), the bottleneck simply isn't at the bus.

Just keep running your GPU at x16 and enjoy your (free) 2% (up to 7%) performance.
AND save on your SSD as well. Something like an SN570 is all you'll ever need.
 
Yes, I have a PCIe 5.0 SSD, an engineering sample from Phison, the one with the small fan.
It must be a tremendous privilege these days to have a Gen5 NVMe drive with a small fan.

If I connect a PCIe 5 SSD, the GPU will work at PCIe 5 x8 and the SSD will work at PCIe 5 x8, even if it only uses x4.
Just think about how bifurcation works, dude. Google it, educate yourself, and listen to what the members above wrote to you.
1. If you connect a Gen5 SSD, the first GPU slot will only be capable of working at Gen5 x8 speed if you have a GPU that supports PCIe 5.0. There is none until 2024.
2. An NVMe SSD uses an x4 connection. If you connect an NVMe AIC (Gen5 x8, for two Gen5 SSDs) to the second x16 slot, it will only be capable of working as a Gen5 x8 device if you insert two Gen5 SSDs. One SSD will work at Gen5 x4.
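Sketched as pseudo-logic, if that makes it clearer (my illustration of the x8/x8 bifurcation described above, not any board's actual firmware):

```python
# Illustrative only: how a board might split the CPU's 16 Gen5 lanes when
# the CPU-attached Gen5 M.2 slot is populated.
def cpu_lane_split(gen5_m2_populated: bool) -> dict[str, str]:
    if gen5_m2_populated:
        # x8/x8 bifurcation: the SSD only uses x4 of its x8 allocation.
        return {"gpu_slot": "x8", "m2_slot": "x8 allocated, SSD trains at x4"}
    return {"gpu_slot": "x16", "m2_slot": "unpopulated"}

print(cpu_lane_split(True))
```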
 
Seems like all we got from PCI-E 5.0 is more expensive motherboards.

GPUs don't even need 4.0 for gaming. Pointless marketing scheme for a feature that should be reserved for professional applications.
6.0 is on the way, which will require an ECC chip as well. Looks like there's no sign of the PCI-E train slowing down.
 
GPUs don't even need 4.0 for gaming. Pointless marketing scheme for a feature that should be reserved for professional applications.
They do need Gen4. PCIe 3.0 x16 (4.0 x8) has been saturated, by a whisker. That's the finding of this test and previous scaling tests from last year. Average loss in performance is ~2%, with some variation across games.
The saturation point is between Gen4 x8 and Gen4 x16, much closer to x8.

I just want to know why AMD was excluded from what many might refer to as one of the most important series of benchmarks to be featured on the Internet.
There are several scaling-test reviews with AMD CPUs and GPUs, both on TPU and at other tech outlets, such as Hardware Unboxed.
Find them, dude. Don't shout around before you look around. Simple. I'm telling you this as the owner of several Intel and AMD systems at home and at work.
 
They do need Gen4. PCIe 3.0 x16 (4.0 x8) has been saturated, by a whisker. That's the finding of this test and previous scaling tests from last year. Average loss in performance is ~2%, with some variation across games.

I doubt this is a question of bandwidth. More likely latency or overhead. But it's still a minimal difference. If you look at the previous PCI-E scaling tests, the results are always similar, with each step down in generation being slightly slower.

The 3080 test from 2020 shows the card being perfectly usable on PCI-E 2.0. Even 1.1 was just 13% behind 4.0. The 4090 is twice as fast, but the difference is only slightly bigger.

I expect the 5090 to be within a 5% margin when using 3.0 x16.
 
Good to know 8x 2.0 still manages to hold up somehow, through the power of magic

Is it assumed that 4x 3.0 and 8x 2.0 would perform the same, or has that been verified in the past?

The 5800X rig was booted from MBR
That disables REBAR?
Seems like REBAR would be what benefits from extra bandwidth the most, other than DirectStorage
 
Is it assumed that 4x 3.0 and 8x 2.0 would perform the same, or has that been verified in the past?
In Hardware Unboxed's tests, Gen3 x4 loses a significant amount of performance. This is the same bandwidth as Thunderbolt 4 for external GPUs.
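For scale, my quick math: Gen3 x4 is 8 GT/s x 4 lanes = 32 Gb/s raw, about 3.9 GB/s after 128b/130b encoding, and 32 Gb/s is exactly the PCIe tunnelling bandwidth Thunderbolt 4 mandates within its 40 Gb/s link.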
 
Does x12 4.0 perform like x16?
 
Does x12 4.0 perform like x16?
While the PCIe spec allows x12 in theory, I'm not aware of any device that supports x12, only x16 or x8
 
In Hardware Unboxed's tests, Gen3 x4 loses a significant amount of performance. This is the same bandwidth as Thunderbolt 4 for external GPUs.

From what I remember, PCI-E bandwidth makes a huge difference when the card runs out of VRAM.

This was shown on the 6500 XT 4 GB, which was limited to 4 lanes. On PCI-E 3.0, you could lose as much as 50% performance in certain games.
The 4090 is an infinitely faster card, yet it does fine even on PCI-E 1.1, as it never runs out of memory.

The 3050 is perfectly fine on PCI-E 3.0, with 8 lanes and 8 GB of VRAM.
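To put rough numbers on why spilling out of VRAM hurts (my arithmetic, assuming the 6500 XT's 64-bit GDDR6 at 18 Gbps): local VRAM gives it roughly 144 GB/s, while PCI-E 3.0 x4 tops out near 3.9 GB/s, so anything that overflows into system RAM is fetched dozens of times slower.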
 
Video summary of the article

 