# Is the Need for PCIe 4.0 for Graphics Cards Really Several Years Away?



## WHOFOUNDFUNGUS (Dec 30, 2019)

Buildzoid thinks so. I'm inclined to disagree. Although it's true that no end-user GPU at the domestic level currently requires Gen 4, that doesn't mean the system can't benefit overall from PCIe Gen 4. Bandwidth is bandwidth, after all: if the rest of the system benefits from having more of it, even the graphics card will work better than in a system that's starved for bandwidth. In fact, I believe someone in Germany recently proved this to be the case. Anyway, that's my view. What do you think?


----------



## oxrufiioxo (Dec 30, 2019)

For low-end cards with 4GB of VRAM it can help, but for high-end GPUs with 8GB+ it's going to be at least two generations before it matters, unless you're in the odd situation where only x8 PCIe lanes are available for the GPU. By that time PCIe Gen 5 might be a thing.


----------



## WHOFOUNDFUNGUS (Dec 30, 2019)

Good point. I've been wondering about Gen 5 myself. Lots of whispers on the wind, and it may well be that we'll see it in the end-user world within a couple of years instead of waiting the better part of a decade, like we did between Gen 3 and Gen 4.


----------



## oxrufiioxo (Dec 30, 2019)

WHOFOUNDFUNGUS said:


> Good point. I've been wondering about Gen 5 myself. Lots of whispers on the wind, and it may well be that we'll see it in the end-user world within a couple of years instead of waiting the better part of a decade, like we did between Gen 3 and Gen 4.



The spec was finalized in May 2019, I believe, so we could see it in 2021/22, right when PCIe Gen 4 might start to matter.


----------



## WHOFOUNDFUNGUS (Dec 30, 2019)

Oh, the irony!

Perhaps this will be Intel's cue to pull up their socks?


----------



## tabascosauz (Dec 31, 2019)

WHOFOUNDFUNGUS said:


> Buildzoid thinks so. I'm inclined to disagree. Although it's true that no end-user GPU at the domestic level currently requires Gen 4, that doesn't mean the system can't benefit overall from PCIe Gen 4. Bandwidth is bandwidth, after all: if the rest of the system benefits from having more of it, even the graphics card will work better than in a system that's starved for bandwidth. In fact, I believe someone in Germany recently proved this to be the case. Anyway, that's my view. What do you think?



That's not even the reason why B450 is the recommended choice for most users. B550 isn't released, so you can't use the CPU's 4.0 lanes unless you buy into X570, which is, above all things, hecking expensive because the chipset has a high price tag and nearly all the boards have premium power delivery. That's why B450 is the platform of choice.

I don't need "some guy" in Germany to tell me that bandwidth on any interconnect is important... when the components can make use of it. It wasn't too long ago that we went through all of this with Sandy and Ivy being on the same LGA1155 but only one of them having PCIe 3.0. At the end of the day, Sandy owners are upgrading because of a lack of platform features and the silicon itself just being dated. Every time we go through a new revision of PCIe, even a number of GPU generations into the new revision, the performance gain is negligible. Yes, GPU manufacturers are going to eventually leverage the new PCIe revision to create faster GPUs, but that's just the onward march of progress, and a different story altogether. The RX 5700s aren't that "new GPU" for PCIe 4.0; they're a 3.0 card with a 4.0 sticker, just like everything else on the market right now. There's no way around that, people don't pull GPU gains out of their asses just in time for a new PCIe revision.

Same story with the early Bristol Ridge APUs on AM4. All of a sudden, they were given DDR4 and all that extra bandwidth......and I don't think anyone expected them to be a sudden sleeper hit, or perform significantly differently to how they would have fared on DDR3. But now, on that same socket, Raven Ridge and Picasso are a very different (and impressive) story; they're made for the times, unlike Excavator-based Bristol Ridge. Hardware needs time to catch up.

Now, the RX 5500 is a different story. It took a special kind of big brain on AMD engineers' part to create a card that requires a motherboard light years beyond its intended market segment to avoid appreciable performance loss due to a half-width PCIe bus.


----------



## oxrufiioxo (Dec 31, 2019)

WHOFOUNDFUNGUS said:


> Oh, the irony!
> 
> Perhaps this will be Intel's cue to pull up their socks?



My guess is Intel skips Gen 4 and goes straight to Gen 5 after Gen 3.


----------



## WHOFOUNDFUNGUS (Dec 31, 2019)

oxrufiioxo said:


> My guess is Intel skips Gen 4 and goes straight to Gen 5 after Gen 3.



That was my thought too.


----------



## Assimilator (Dec 31, 2019)

PCIe 4.0 will help if you're bandwidth-starved, yes... but there is currently no device that requires more bandwidth than PCIe 3.0 can provide (except for PCIe 4.0 SSDs, of course). The only product that's an exception to this rule is the 4GB RX 5500 XT, and that card is crippled both by not having enough memory and by a PCIe x8 link instead of x16, because AMD had an aneurysm when they designed it. TPU's own tests of the RTX 2080 Ti, the highest-end card of this generation, show that it loses mere single-digit percentage points in a 3.0 x8 vs 3.0 x16 configuration.

This does mean that if the RTX 2080 Ti were PCIe 4.0 capable, you'd be able to use it without any performance penalty in any system that only provides 16 lanes for the GPUs in total and splits them x8/x8 when two cards are installed. But people who can afford multiple RTX 2080 Tis can also afford motherboards that offer 16 lanes to each GPU, so it's a moot point.

I believe AMD is pushing PCIe 4.0 for the simple reason that their GPUs are simply not as efficient as NVIDIA's at utilising the PCIe bus. So the question to ask really isn't "is the need for PCIe 4.0 for GPUs still several years away?", it's "is *AMD's* need for PCIe 4.0 for GPUs still several years away?" - and I believe the answer to that is "no".

It doesn't really matter though - at this point I'd be shocked if the next-gen GPUs didn't come with PCIe 4.0 capabilities, at which point Intel would be the only company not on the PCIe 4.0 wagon. And while I doubt that being off that wagon will actually hurt performance on their systems, there will be a marketing impact. About the only question is whether they'll be shamed into shoehorning PCIe 4.0 into the current Skylake derivative, or whether they'll stick it out for PCIe 5.0.


----------



## WHOFOUNDFUNGUS (Dec 31, 2019)

Assimilator said:


> It doesn't really matter though - at this point I'd be shocked if the next-gen GPUs didn't come with PCIe 4.0 capabilities, at which point Intel would be the only company not on the PCIe 4.0 wagon. And while I doubt that being off that wagon will actually hurt performance on their systems, there will be a marketing impact. About the only question is whether they'll be shamed into shoehorning PCIe 4.0 into the current Skylake derivative, or whether they'll stick it out for PCIe 5.0.



I agree. Given Intel's track record I'm inclined to think they'll stick it out. They have bigger fish to fry, IMO, and they need to come up with something profound if they intend to get back in the race with AMD. I'm both an NVIDIA and AMD graphics user. I wouldn't bash AMD for their GPUs, but there's no disputing that they still have a ways to go if they plan to give NVIDIA any real competition.


----------



## ToxicTaZ (Dec 31, 2019)

From what I heard...

Intel Meteor Lake is on PCIe 5.0, supposedly 2022... most likely a CES 2023 preview at best.

Intel's upcoming LGA 1200 socket is PCIe 4.0 ready, which means future Rocket Lake CPUs will have a new controller on them. Intel did this before with PCIe 3.0 on the Z77 chipset: people ran a 2700K on Z77 but only had PCIe 2.0; to get PCIe 3.0 you had to have a 3770K in that same Z77 board.

The biggest benefit right now from PCIe 4.0 is M.2 drives... video cards aren't showing real benefits at the moment.

In case anyone is wondering, PCIe 6.0 is almost done. Maybe AMD will leapfrog Intel into PCIe 6.0 with Zen 5 on AM5?



https://pcisig.com/pci-express®-60-specification-track-revision-03-complete


----------



## notb (Dec 31, 2019)

Assimilator said:


> It doesn't really matter though - at this point I'd be shocked if the next-gen GPUs didn't come with PCIe 4.0 capabilities, at which point Intel would be the only company not on the PCIe 4.0 wagon. And while I doubt that being off that wagon will actually hurt performance on their systems, there will be a marketing impact. About the only question is whether they'll be shamed into shoehorning PCIe 4.0 into the current Skylake derivative, or whether they'll stick it out for PCIe 5.0.


PCIe 4.0 has been confirmed for both Intel's mobile and server platforms (i.e. what was going to get new architectures and 10nm in 2020).
Obviously, this will be a short-lived (intermediate) standard as PCIe 5.0 is around the corner.

As for desktops: Intel hoped this whole segment could be hibernated for at least another 2 years.
It seems there may be some shift in the strategy, so who knows?
That said, this could mean desktop stuff based on either high-power mobile tech (small dies, up to 8 cores; 65W tops) or low-end Xeons (who knows what socket...).

I wouldn't have high hopes (none, to be honest) for the segment that most people on this forum focus on, i.e. high-power gaming / workstation.
It doesn't look like Intel is interested in this market anymore. Much like AMD, they're listening to their clients (they just have different clients).
These CPUs will be made out of mobile/server leftovers - not unlike what has been going on for the last few years.


----------



## HTC (Dec 31, 2019)

With PCIe 5.0, the bandwidth will double again and, assuming it's still backwards compatible with older standards down to PCIe 1.0, you could have the equivalent bandwidth of a PCIe 3.0 x16 card using only 4 PCIe 5.0 lanes. Of course, this would only work if both the board AND the GPU were PCIe 5.0 compliant.
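The lane math here is easy to sanity-check. A rough sketch (per-lane rates are from the PCIe specs, including encoding overhead; the helper names are my own):

```python
# Per-lane transfer rates (GT/s) for each PCIe generation.
# Gens 1-2 use 8b/10b encoding; gen 3 onward use 128b/130b.
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_bandwidth(gen: int) -> float:
    """Usable bandwidth of one lane, in GB/s, after encoding overhead."""
    encoding = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_LANE[gen] * encoding / 8  # gigabits -> gigabytes

def link_bandwidth(gen: int, lanes: int) -> float:
    """Total usable bandwidth of a link, in GB/s."""
    return lane_bandwidth(gen) * lanes

# PCIe 5.0 x4 really does match PCIe 3.0 x16 (~15.75 GB/s each):
print(f"3.0 x16: {link_bandwidth(3, 16):.2f} GB/s")
print(f"5.0 x4:  {link_bandwidth(5, 4):.2f} GB/s")
```

Since each generation exactly doubles the per-lane rate from 3.0 onward, any x4 link of generation N+2 matches an x16 link of generation N.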

That PCIe 4.0's bandwidth isn't fully used today doesn't mean it won't be used tomorrow. We can already see cases where there's a significant performance increase from using PCIe 4.0 compliant devices, mainly NVMe drives.


----------



## arbiter (Dec 31, 2019)

Do you need PCIe Gen 4 for graphics? The short and easy answer is no. If you look at the scaling tests done on here using an RTX 2080 Ti, the difference between PCIe 3.0 x16 and x8 (which compare to PCIe 4.0 x8 and x4) is that the card only loses an average of 2-3% of performance. If you drop down one more step, to PCIe 3.0 x4, the loss grows to 6-9%. It's good to have for future-proofing your build for, say, 5-6 years down the road, but it likely won't be the first thing that cripples the system, compared to the CPU getting old or the GPU lacking VRAM, which current claims say was the main reason for the performance loss.









Link: *NVIDIA GeForce RTX 2080 Ti PCI-Express Scaling* (www.techpowerup.com): "It takes a lot of bandwidth to support the fastest graphics card, especially one that can play anything at 4K 60 Hz, with an eye on 120 Hz. The GeForce RTX 2080 Ti could be the most bandwidth-heavy non-storage PCIe device ever built. PCI-Express gen 3.0 is facing its design limits."
				






Assimilator said:


> The only product that's an exception to this rule is the 4GB RX 5500 XT, and that card is crippled both by not having enough memory and by a PCIe x8 link instead of x16, because AMD had an aneurysm when they designed it.


It's not so much AMD as the fact that reviewers tested the card with game settings that use more VRAM than the card has. It would be like a game that wants 8GB of system memory when you only have 4GB installed: the game will still run, but it will run like crap. As much as I have been critical of AMD's many faults, this one isn't warranted. If PCIe were that much of a flaw, the 8GB card would show the same issues on PCIe 3.0, which it doesn't. So if you cheap out and buy the 4GB card, you'll have to back down the graphics settings so as not to max out the 4GB of memory.



Assimilator said:


> I believe AMD is pushing PCIe 4.0 for the simple reason that their GPUs are simply not as efficient as NVIDIA's at utilising the PCIe bus. So the question to ask really isn't "is the need for PCIe 4.0 for GPUs still several years away?", it's "is *AMD's* need for PCIe 4.0 for GPUs still several years away?" - and I believe the answer to that is "no".


Pretty sure NVIDIA cards would show the same issues if you ran the same test on them. It's just a simple case of asking for more VRAM than the card has.


----------



## jaggerwild (Dec 31, 2019)

Intel owns server platforms, or did, so Intel isn't hurting... yet. They had the lead a long, long time, with oodles of money to spend. Intel is also the second highest-selling GPU vendor (lolz!); go figure, they don't even make a discrete GPU (yet). A little homework goes a long way...


----------



## HTC (Dec 31, 2019)

arbiter said:


> *Do you need PCIe Gen 4 for graphics? The short and easy answer is no.* If you look at the scaling tests done on here using an RTX 2080 Ti, the difference between PCIe 3.0 x16 and x8 (which compare to PCIe 4.0 x8 and x4) is that the card only loses an average of 2-3% of performance. If you drop down one more step, to PCIe 3.0 x4, the loss grows to 6-9%. It's good to have for future-proofing your build for, say, 5-6 years down the road, but it likely won't be the first thing that cripples the system, compared to the CPU getting old or the GPU lacking VRAM, which current claims say was the main reason for the performance loss.
> 
> 
> 
> ...



Not necessarily.

The cards tested have enough memory onboard not to be significantly affected by the different PCIe bandwidths tested. But what happens when you test those same cards with, say, half the VRAM they usually come with, forcing the card to communicate more often over PCIe? Would you get another "5500 XT 4GB on a PCIe 3.0 board" scenario?

I honestly don't know, but I don't dismiss it outright.
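That hypothesis can be put into a back-of-the-envelope model. This is purely my own illustration with made-up numbers, not data from any review: assume (worst case) that whatever part of the working set doesn't fit in VRAM has to be re-streamed over PCIe every frame, and see what that does to frame rate.

```python
# Toy worst-case model: any VRAM overflow is re-fetched over PCIe each frame,
# so frame time = render time + overflow / link bandwidth.

def effective_fps(base_fps: float, working_set_gb: float,
                  vram_gb: float, link_gbps: float) -> float:
    overflow = max(0.0, working_set_gb - vram_gb)        # GB crossing the bus per frame
    frame_time = 1.0 / base_fps + overflow / link_gbps   # seconds per frame
    return 1.0 / frame_time

# 4 GB card with a 6 GB working set: doubling the link helps a lot...
print(effective_fps(60, 6.0, 4.0, link_gbps=8.0))   # roughly PCIe 3.0 x8
print(effective_fps(60, 6.0, 4.0, link_gbps=16.0))  # roughly PCIe 4.0 x8
# ...while with enough VRAM the link speed is irrelevant:
print(effective_fps(60, 6.0, 8.0, link_gbps=8.0))
```

The absolute numbers are meaningless (real drivers stream far less than the whole overflow per frame), but the shape matches the 5500 XT reviews: link speed only starts to matter once VRAM runs out.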


----------



## notb (Dec 31, 2019)

HTC said:


> That PCIe 4.0's bandwidth isn't fully used today doesn't mean it won't be used tomorrow. We can already see cases where there's a significant performance increase from using PCIe 4.0 compliant devices, mainly NVMe drives.


Of course at some point we'll get disks, Internet connections, or accessories so fast that PCIe 4.0's potential could be used. That has absolutely no bearing on whether PCIe 4.0 will become widely used.

PCIe 4.0 didn't get any serious traction with POWER9 (which has supported PCIe 4.0 since 2017), nor with EPYC Rome (servers are offered with PCIe 3.0 SSDs).
The consumer implementation in Zen 2 is unsuitable for laptops (hot chipset).
This means the only existing use case is enthusiast-aimed desktops, with a few SSDs available (and Corsair leading the pack).

The most likely scenario is that PCIe 5.0 will arrive before 4.0 gets any serious market adoption.
And of course PCIe 5.0 is backed by a lot of serious players via the CXL consortium.


----------



## arbiter (Dec 31, 2019)

HTC said:


> Not necessarily.
> 
> The cards tested have enough memory onboard not to be significantly affected by the different PCIe bandwidths tested. But what happens when you test those same cards with, say, half the VRAM they usually come with, forcing the card to communicate more often over PCIe? Would you get another "5500 XT 4GB on a PCIe 3.0 board" scenario?
> 
> I honestly don't know, but I don't dismiss it outright.


Part of the issue with those tests is that they ran every game at the ultra preset. That's all well and good for benchmarks, but when you can see the problem of running out of VRAM, it's time to back the settings down. Even with PCIe 4.0 enabled, the card is still much slower than its 8GB counterparts. You don't need 4.0 even with the highest-end cards, which has been proven time and time again when VRAM isn't crippling the card. Let's be real: if you're going to use a budget card, running games on ultra isn't what you're going to do, as you know the card likely won't be up to the task.


----------



## cucker tarlson (Dec 31, 2019)

Assimilator said:


> TPU's own tests of the RTX 2080 Ti, the highest-end card of this generation, show that it loses mere single-digit percentage points in a 3.0 x8 vs 3.0 x16 configuration.



Dammit people, read the review, not just the averages. It's as much as 10-12% at times.


----------



## notb (Dec 31, 2019)

cucker tarlson said:


> Dammit people, read the review, not just the averages. It's as much as 10-12% at times.


Or as little as 0.5%. I'm sure everyone has an idea of how averaging works.


----------



## R0H1T (Dec 31, 2019)

HTC said:


> With PCIe 5.0, the bandwidth will double again and, *assuming it's still backwards compatible* with older standards down to PCIe 1.0, you could have the equivalent bandwidth of a PCIe 3.0 x16 card using only 4 PCIe 5.0 lanes. Of course, this would only work if both the board AND the GPU were PCIe 5.0 compliant.
> 
> *That PCIe 4.0's bandwidth isn't fully used today doesn't mean it won't be used tomorrow*. We can already see cases where there's a significant performance increase from using PCIe 4.0 compliant devices, mainly NVMe drives.


It is.

Depends on what you're using it for. As a matter of fact, in the enterprise/HPC space this matters so much more, and that space is bandwidth-starved, so they'll take all you've got and then some!


----------



## cucker tarlson (Dec 31, 2019)

notb said:


> Or as little as 0.5%. I'm sure everyone has an idea of how averaging works.


If this can go up to 10-12%, then the results around 0.5% are not the cases where they tested the bottleneck on x8 and found none; they're the cases that never made x8 choke in the first place.


----------



## HTC (Dec 31, 2019)

arbiter said:


> Part of the issue with those tests is that they ran every game at the ultra preset. That's all well and good for benchmarks, but when you can see the problem of running out of VRAM, it's time to back the settings down. Even with PCIe 4.0 enabled, the card is still much slower than its 8GB counterparts. You don't need 4.0 even with the highest-end cards, which has been proven time and time again *when VRAM isn't crippling the card*. Let's be real: if you're going to use a budget card, running games on ultra isn't what you're going to do, as you know the card likely won't be up to the task.



That right there is the issue, as demonstrated by 5500 XT *4GB* cards running on PCIe *3.0* boards in more VRAM-intensive games.

Try using a high-end card with, say, 33% or more of its VRAM artificially disabled (assuming that's even possible), thus forcing it to resort to PCIe communication much more often, and test PCIe scaling: if I'm right, a card that showed 2% to 3% performance loss testing PCIe 3.0 vs 2.0 should now see a much more significant performance loss.


----------



## notb (Dec 31, 2019)

R0H1T said:


> Depends on what you're using it for. As a matter of fact, in the enterprise/HPC space this matters so much more, and that space is bandwidth-starved, so they'll take all you've got and then some!


Mythomania.
Enterprise (and HPC in particular!) workloads are not limited by drive speeds.

You should really look at the existing Zen 2 servers on offer. How many of them are built around PCIe 4.0 drives?


----------



## R0H1T (Dec 31, 2019)

Who said anything about drive speeds? There are tons of lanes which can be used for things other than storage, including but not limited to workload-specific accelerators.
For instance ~ *Xilinx Announces World Largest FPGA: Virtex Ultrascale+ VU19P with 9m Cells*


----------



## IceShroom (Dec 31, 2019)

Looks like people are against PCIe 4.0 because their favourite company doesn't have PCIe 4.0.
An NVIDIA card doesn't need PCIe 4.0 because it's an NVIDIA card.
And enterprise and HPC do need fast PCIe.


----------



## Assimilator (Dec 31, 2019)

Can we please not muddy this conversation with enterprise/server grade? It's a given that these markets will always want the maximum amount of bandwidth yesterday, and that's understandable because they do actually need it. But based on OP's comments and video they were talking about consumer, so can we keep on that topic plzkthx?



ToxicTaZ said:


> Just if anyone is wondering PCIe 6.0 is almost done. Maybe AMD will leap frog Intel into PCIe 6.0 with Zen 5 AM5?
> 
> 
> 
> https://pcisig.com/pci-express®-60-specification-track-revision-03-complete



If AMD manages to do a new Zen uArch every year, then Zen 5 will be 2023; if PCI-SIG's projections are correct and 6.0 is done by 2021, that would definitely be enough time to have it in Zen 5 (hell, possibly even in Zen 4). But we've already seen with X570 that PCIe 4.0 adds complexity and therefore cost over 3.0, so I'd be surprised if the consumer market gets anything more than 4.0 for about as long as we've had 3.0.


----------



## EarthDog (Dec 31, 2019)

WHOFOUNDFUNGUS said:


> Perhaps this will be Intel's cue to pull up their socks?


Intel will pull up their socks when they need to. PCIe 4.0 isn't needed now, unless you're AMD and for some reason ship a 4GB card with an x8 electrical connection.


arbiter said:


> Part of the issue with those tests is that they ran every game at the ultra preset. That's all well and good for benchmarks, but when you can see the problem of running out of VRAM, it's time to back the settings down. Even with PCIe 4.0 enabled, the card is still much slower than its 8GB counterparts. You don't need 4.0 even with the highest-end cards, which has been proven time and time again when VRAM isn't crippling the card. Let's be real: if you're going to use a budget card, running games on ultra isn't what you're going to do, as you know the card likely won't be up to the task.


Because we bought PCs to game at console settings? I don't get the mentality of shooting for anything less. And note, in most of these titles where there's a massive performance hit, the card is able to run 60 FPS at ultra anyway.



IceShroom said:


> Looks like people are against PCIe 4.0 because their favourite company doesn't have PCIe 4.0.
> An NVIDIA card doesn't need PCIe 4.0 because it's an NVIDIA card.
> And enterprise and HPC do need fast PCIe.


lol, riiiiiiiight.

If AMD had made their budget card right, it wouldn't need 4.0 either.

While you're right about HPC and enterprise being able to use it for storage purposes (among other reasons), this site is for home users. So it really isn't needed for consumers. It's not that people are against it because their preferred brand (lol) doesn't have it.


----------



## arbiter (Dec 31, 2019)

cucker tarlson said:


> Dammit people, read the review, not just the averages. It's as much as 10-12% at times.


My counter is this: we use the 2080 Ti as an example that PCIe 3.0 x8 doesn't cripple a 2080 Ti the way the 5500 XT gets crippled. The cripple is, and always was, the fact that you're asking for more VRAM than the card has. Now, that 10% drop in some games could have more to do with the fact that the 2080 Ti is 2-2.5x faster than a 5500 XT. If you look at reviews here on TPU, a 5500 XT averages ~73 FPS across the tests at 1080p, while a reference 2080 Ti does 174 FPS. So bandwidth constraints would likely affect a 2080 Ti far more than a 5500 XT if they were to face them.
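There's a simple way to see why a faster card leans on the bus harder: per-frame transfers happen, well, per frame, so total bus traffic scales with frame rate. A quick sketch; the 50 MB/frame figure is purely illustrative (my assumption), while the FPS numbers are the TPU averages quoted above:

```python
# Bus traffic scales with frame rate: the same per-frame data,
# moved more times per second, needs proportionally more bandwidth.

def pcie_traffic_gb_s(mb_per_frame: float, fps: float) -> float:
    """MB moved per frame, times frames per second, expressed in GB/s."""
    return mb_per_frame * fps / 1000.0

slow_card = pcie_traffic_gb_s(50, 73)   # 5500 XT-class 1080p average
fast_card = pcie_traffic_gb_s(50, 174)  # 2080 Ti-class 1080p average
print(f"{fast_card / slow_card:.2f}x the bus pressure")  # ~2.38x
```

So at identical per-frame transfer sizes, the faster card generates ~2.4x the PCIe traffic, which is consistent with the high-end card being the one to hit link limits first.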








Link: *Gigabyte Radeon RX 5500 XT Gaming OC 8 GB Review* (www.techpowerup.com): "Gigabyte's RX 5500 XT Gaming OC comes with 8 GB VRAM to ensure there's plenty of memory for the latest games. The cooler has three extremely quiet fans, and fan stop is included, too. With manual overclocking, we broke the 2 GHz barrier, more than on any other RX 5500 XT so far."
				





HTC said:


> That right there is the issue, as demonstrated by 5500 XT *4GB* cards running on PCIe *3.0* boards in more VRAM-intensive games.
> Try using a high-end card with, say, 33% or more of its VRAM artificially disabled (assuming that's even possible), thus forcing it to resort to PCIe communication much more often, and test PCIe scaling: if I'm right, a card that showed 2% to 3% performance loss testing PCIe 3.0 vs 2.0 should now see a much more significant performance loss.


If you took an RTX 2080 and removed half its memory, I'm pretty sure it would suffer a massive FPS loss like in that test. It's the fact that the card lacked memory for what the game was asking for.


EarthDog said:


> Because we bought PCs to game at console settings? I don't get the mentality of shooting for anything less. And note, in most of these titles where there's a massive performance hit, the card is able to run 60 FPS at ultra anyway.


Well, welcome to the world of budget computer gaming. When you buy things on a budget, you kind of expect not to run games at ultra settings. Anyone who does expect that likely doesn't understand the value of a dollar. It would be like buying a little Ford Ranger back in the day, sticking a snow plow blade on it, and expecting to push the same snow on a daily basis as an F-250.


----------



## ToxicTaZ (Jan 1, 2020)

Intel is slow to move to PCIe 4.0:

Intel Xe is PCIe 4.0
Intel's 400 series chipsets are PCIe 4.0 ready

Intel's 10th generation is not; it's PCIe 3.0 only and short-lived, like 7th gen.

So Intel's Rocket Lake 11th gen (made by Samsung, supposedly) will bring the PCIe 4.0 upgrade in Q4 2020.

Intel Alder Lake: Q4 2021.

So confusing.
Intel is in a PCIe mess.

NVIDIA's 3000 series should have PCIe 4.0.

AMD has done it right.


----------



## TheoneandonlyMrK (Jan 1, 2020)

EarthDog said:


> Intel will pull up their socks when they need to. PCIe 4.0 isn't needed now, unless you're AMD and for some reason ship a 4GB card with an x8 electrical connection.
> Because we bought PCs to game at console settings? I don't get the mentality of shooting for anything less. And note, in most of these titles where there's a massive performance hit, the card is able to run 60 FPS at ultra anyway.
> 
> lol, riiiiiiiight.
> ...


Do you not think the next consoles will push the envelope?

With all the talk of no load times, I can't help but think it will take memory pools and internal bandwidth to make that happen. Yes, it's not per se the GPU that needs the bandwidth, but it's possible it could make PCs feel a bit, well, "loady"?


----------



## WHOFOUNDFUNGUS (Jan 1, 2020)

R0H1T said:


> Who said anything about drive speeds? There's tons of lanes which can be used for things other than storage, including but not limited to workload specific accelerators.
> For instance ~ *Xilinx Announces World Largest FPGA: Virtex Ultrascale+ VU19P with 9m Cells*



Yeah, but who wants those common old domestic end user chips? 



ToxicTaZ said:


> Intel is slow to move to PCIe 4.0:
> 
> Intel Xe is PCIe 4.0
> Intel's 400 series chipsets are PCIe 4.0 ready
> ...



I think AMD has done it right too, but in the short term. I wonder if they're going to be wise about it and keep handing the benefit to the consumer, however. Already they seem to be getting pulled into the greed curve. They may just get a little too big for their britches and do the same flash-in-the-pan gig they did before. If that should prove to be the case, it won't be long before AMD is eating humble pie again. They would do well to learn from their past mistakes and keep a competitive price margin running, because it doesn't take much for a company with 100 times the resources (a company like Intel) to blast them out of the game should the sleeping giant awaken.

As it currently stands, AMD is enjoying the short-term benefits of PCIe Gen 4 and giving domestic users features Intel has chosen not to offer. I say "chosen" because there is no way in the world a company like Intel literally "missed the boat" merely as a result of market oversight. Something's up. Something huge. The word "modular" keeps popping into my mind, and I'm wondering if Intel isn't preparing a game changer that will completely revolutionize PC architecture in a way that would make conventional PCs look absolutely archaic. They could be just playing along while their "deep state" team develops something we've never even seen the likes of.

Maybe it's my workstation mentality that has given me some bias here, but it is still my opinion that by freeing up bandwidth, even a GPU that normally runs on PCIe Gen 3 could benefit some in a PCIe Gen 4 platform. I'm probably too conditioned to cards competing for lanes, and this is likely not much of a concern for pure gamers, I suppose. I mean, why ever would a gamer want extra bandwidth for things like streaming, more storage drives, card readers, and devices as retro as DVD/CD-ROMs? I confess, I'm old school, and I have little doubt it won't be long before PCIe Gen 4 will be considered a thing of the past. Just look at what happened to Thunderbolt.


----------



## king of swag187 (Jan 1, 2020)

The 5500 XT benefits a fair bit from 4.0.


----------



## WHOFOUNDFUNGUS (Jan 1, 2020)

I suppose the argument is that _if manufacturers made their PCIe components "right" there wouldn't be any need for PCIe Gen 4_. I would say that this is beside the point and a rather opinionated claim regardless. I mean, I could say that if manufacturers made system boards "right" there wouldn't be any need for PCIe at all. It's a moot statement. Seeing that PCIe is still going to be with us into this decade, my guess is that even lowly end users like myself will likely benefit from PCIe Gen 4. As for _*need*_, I'm not so sure *Buildzoid* (or anyone else for that matter) is authorized to determine such a thing, as this sort of terminology is somewhat ambiguous considering what it generally implies. Conversely, I suppose it can be said that machines have needs in a loose sense. Do graphics cards currently "need" PCIe Gen 4? Perhaps some would benefit marginally. I did include Buildzoid's video, as he appears to confuse the needs of a device with his own personal needs and/or preferences. He doesn't need PCIe Gen 4 and that's fine. I'm sure most of us PC users could get by with PCIe Gen 3, and that's fine also. Evidently PCIe Gen 4 isn't about needs. I'm guessing it's more about convenience and benefit. I'm having a hard time swallowing the suggestion that double the bandwidth is not a benefit, and I suspect having it will free up a whole lot more than just graphics cards.


By all means, *IceShroom*, feel free to "muddy" this thread with enterprise/server-grade chatter. Even some of us domestic end users like to build home servers. As for me, I'll be running my lowly little PCIe Gen 3 rig on the home server end for many years to come — even if it means locking it up in a basement closet when it gets too noisy. I hope to get started on my brand new AMD gaming rig this month, and I trust I shall soon see if my graphics card benefits from the added advantage of PCIe Gen 4. I'm certain my storage will.


----------



## IceShroom (Jan 1, 2020)

WHOFOUNDFUNGUS said:


> I suppose the argument is that_ if manufacturers made their PCIe components "right" there wouldn't be any need for PCIe Gen 4._ I would say that this is beside the point and a rather opinionated claim regardless. I mean, I could say that if manufacturers made system boards "right" there wouldn't be any need for PCIe at all. It's a moot statement. Seeing that PCIe is still going to be with us into this decade, my guess is that even the lowly end users like myself will likely benefit from PCIe Gen 4. As for _*need*_, I'm not so sure *Buildzoid* (or anyone else for that matter) is so authorized to determine such a thing as this sort of  terminology is somewhat ambiguous considering what it generally implies. Conversely, I suppose that it can be said that machines have needs in a loose sense. Do graphics cards currently "need" PCIe Gen 4? Perhaps some would benefit marginally. I did include Buildzoid's video as he appears to confuse the needs of a device with his own, personal needs and/or preferences. He doesn't need PCIe Gen 4 and that's fine. I'm sure most of us PC users could get by with PCIe Gen 3 and that's fine also. Evidently PCIe Gen 4 isn't about needs. I'm guessing it is more about convenience and benefit. I'm having a hard time swallowing the suggestion that double bandwidth is not a benefit and I suspect the benefit from having it will free up a whole lot more than just graphics cards.
> 
> 
> By all means, *IceShroom*, feel free to "muddy" this thread with enterprise/server grade chatter. Even some of us domestic end users like to build home servers. As for me, I'll be running my lowly little PCIe Gen 3 rig on the home server end for many years to come — even if it means locking it up in a basement closet when it gets too noisy. I hope to get started on my brand new AMD gaming rig this month, and I trust I shall soon see if my graphics card benefits from the added advantage of PCIe Gen 4. I'm certain my storage will


I am not muddying the thread; you guys have been muddying AMD's PCI-e 4.0 launch from the first day it was announced. You spared not a second to criticize the X370/X470's PCI-e 2.0 lanes, yet now that AMD has given you PCI-e 4.0 you're still criticizing it??
And what has PCI-e 4.0 ever done to you guys?? Why are you so against it?


----------



## WHOFOUNDFUNGUS (Jan 1, 2020)

Myself, I think there is some misunderstanding about my personal position on the matter. *If* I didn't like the idea of PCI-e 4.0, I don't think I would have just purchased a system board that uses it. On the contrary – I am all for it. Furthermore, what AMD has done over these past couple of years is phenomenal, in my opinion, and I hope they keep up the good work. Admittedly, I can't speak for the Assimilator, but that doesn't mean I can't quote him on a few things. Why do you think I criticized the X370 or X470 platforms? The only thing about the X570 that had me concerned (and still does to an extent) is the whole idea of onboard fans over the chipset. Even this did not prevent me from buying an X570 system board, once I discovered one that was user-friendly enough for me to change out the fan in the event of a failure. What gives you the impression that I am against PCI-e Gen 4?


----------



## Deleted member 171912 (Jan 1, 2020)

Yes, still not needed for consumer graphics cards.

But new-gen PCIe is important for servers. New PCIe 5.0 controllers will soon be implemented in chipsets and SoCs and used for storage, 400GbE networking, AI, HPC, and streaming in datacenters/clouds. That's why AMD started with PCIe 4.0 last year and is preparing new-gen cards, NVIDIA acquired Mellanox and is preparing new-gen cards, and Intel is preparing too, having started with its first-gen GPGPU. It will be big business in the near future.

And for example, IBM's POWER9 has had PCIe 4.0 I/O support for 2 years already.


----------



## WHOFOUNDFUNGUS (Jan 1, 2020)

Yes, that's right. I've often wondered why they made domestic end users wait so long, but at least we can confidently say that Gen 4 has been tried and tested.


----------



## EarthDog (Jan 1, 2020)

theoneandonlymrk said:


> Do you not think the next consoles will push the envelope?
> 
> With all the talk of no load times, I can't help but think it will take memory pools and internal bandwidth to make that happen. It isn't the GPU per se that needs the bandwidth, but it's possible it could make PCs a bit... load-y?


I'm not holding my breath...

Pcie 4.0 is pretty useless for most users...especially in the GPU realm.


----------



## WHOFOUNDFUNGUS (Jan 1, 2020)

EarthDog said:


> I'm not holding my breath...
> 
> Pcie 4.0 is pretty useless for most users...especially in the GPU realm.



I won't argue with that. Most users likely couldn't care less about it. My point of contention might be more a matter of semantics. Buildzoid was using the word "need" rather liberally, I thought. Perhaps he might have chosen a better word but, again, it still points to semantics. Does a PC actually "need" anything? Does a PC actually "benefit" from a component or, rather, is it the *end user* who benefits from using the component? I thought about what Buildzoid was saying. Most PCs aren't implementing a whole lot in the way of *AI*, so how does the PC have any needs or benefits? Perhaps if one were to look at the subject in the light of marketing it might seem more rational: does the graphics card market _need_ PCIe 4.0? I would still venture to say there is some benefit here as well. The future-proofing end of it is a no-brainer, but even in the immediate here and now I am of the persuasion that the market and the end user will benefit.

I suppose I would do well to control my compulsion to add 100 terabytes of storage to my gaming rig, but I am one who likes my options. I realize most GPUs won't even cap out a PCIe x16 link, BUT... I have seen conflicts between cards many times in my PC-building days, and it has been my experience that a certain general rule of thumb tends to prevail with respect to PCIe. I am referring to the system itself. If the system falters, the whole shebang is compromised, the GPU and the works. A system can only be as strong as the weakest link in the chain. So I suggest again that an end user could benefit from running a current graphics card (designed for use in a PCIe 3.0 system board) in a PCIe 4.0 platform, because in the end it all works together.


----------



## EarthDog (Jan 2, 2020)

WHOFOUNDFUNGUS said:


> I won't argue with that. Most users likely couldn't care less about it. My point of contention might be more a matter of semantics. Buildzoid was using the word "need" rather liberally, I thought. Perhaps he might have chosen a better word but, again, it still points to semantics. Does a PC actually "need" anything? Does a PC actually "benefit" from a component or, rather, is it the *end user* who benefits from using the component? I thought about what Buildzoid was saying. Most PCs aren't implementing a whole lot in the way of *AI*, so how does the PC have any needs or benefits? Perhaps if one were to look at the subject in the light of marketing it might seem more rational: does the graphics card market _need_ PCIe 4.0? I would still venture to say there is some benefit here as well. The future-proofing end of it is a no-brainer, but even in the immediate here and now I am of the persuasion that the market and the end user will benefit.
> 
> I suppose I would do well to control my compulsion to add 100 terabytes of storage to my gaming rig, but I am one who likes my options. I realize most GPUs won't even cap out a PCIe x16 link, BUT... I have seen conflicts between cards many times in my PC-building days, and it has been my experience that a certain general rule of thumb tends to prevail with respect to PCIe. I am referring to the system itself. If the system falters, the whole shebang is compromised, the GPU and the works. A system can only be as strong as the weakest link in the chain. So I suggest again that an end user could benefit from running a current graphics card (designed for use in a PCIe 3.0 system board) in a PCIe 4.0 platform, because in the end it all works together.


O.......K... Not really sure what any of that actually means.

If there isn't a performance benefit (or some kind of benefit), what's the point? You say they can benefit, but there is literally no reason to get it now unless you bought a 4GB 5500 XT or have a lot of M.2 and SATA drives. If a user has that situation, it can be beneficial; otherwise, not so much. It IS marketing making some believe there is value in it for the masses. Right now there isn't.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

I have a lot of M.2 and SATA drives. I also have a lot of cards. I'm thinking about going into data streaming at some point. I'd also like to use a capture card after I move into my new home. If extra bandwidth can help any of those without robbing what my graphics card would use, then that would be a benefit, I would think. Also, having the freedom to place a graphics card in any PCIe slot on the system board could be a handy thing. Perhaps system boards will all feature same-size slots in the future (like my workstation does), as there will be enough bandwidth to cover seven x16 slots if need be. Wouldn't that be nice to have as a standard feature?


----------



## EarthDog (Jan 2, 2020)

WHOFOUNDFUNGUS said:


> Also, having the freedom to place a graphics card in any PCIe slot on the system board could be a handy thing. Perhaps system boards will all feature same-size slots in the future (like my workstation does), as there will be enough bandwidth to cover seven x16 slots if need be. Wouldn't that be nice to have as a standard feature?


I think you are reaching, personally. 

You're also on the HEDT platform, which I'll assume you will stay with when you upgrade again. Plenty of PCIe lanes are available there for most setups... 4.0 not needed.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

I'm absolutely reaching. I want more information. I'm building a PCIe Gen 4 rig, my very first. Why would I not be reaching?


----------



## EarthDog (Jan 2, 2020)

WHOFOUNDFUNGUS said:


> I'm absolutely reaching. I want more information. I'm building a PCIe Gen 4 rig, my very first. Why would I not be reaching?


All I am saying is that the reasons you listed for 4.0 being a benefit don't actually benefit a lot of people (and you're reaching to make it 'worth it' and defend the point).

EDIT: I get that technology needs to be out there for it to be utilized; I'm just saying it really doesn't affect a lot of people now, and won't for some time.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

EarthDog said:


> All I am saying is that the reasons you listed for 4.0 being a benefit don't actually benefit a lot of people (and you're reaching to make it 'worth it' and defend the point).



That's why I stated I wouldn't argue that point. Most people won't need that much bandwidth, but as I already stated, this isn't necessarily about needs. For me, it's about wants; but I concede in the knowledge that if I _want_ that much bandwidth, I may benefit from using a PCIe Gen 4 system board. Furthermore, it has already been established that in some cases even a graphics card designed for use on PCIe Gen 3 can indeed reap benefits on a PCIe Gen 4 build. I'm not making any rules here, I'm merely making an observation, but if there is a rule to be made I would surmise that the exception has already proved it: PCIe Gen 4 provides benefits for graphics cards already in existence. I suspect we'll be seeing even more of this as time goes by.


----------



## EarthDog (Jan 2, 2020)

I do not reap any benefits from wanting alone. lol

PCIe 4.0 improves ONE videocard, because of what many feel is a design flaw, and only in VRAM-limited situations. If the card were made with proper routing, it would act like all the others and not show improvements. A 5700 XT performs the same on PCIe 4.0 x16 as on 3.0 x16. Same for a 2080 Ti: it performs the same on PCIe 4.0 x16 as it does on 3.0 x16. Margin-of-error difference overall.

PCI-Express 4.0 Performance Scaling with Radeon RX 5700 XT — www.techpowerup.com
PCI-Express 4.0 has been one of the main new features prominently brandished on the product boxes of both the 3rd generation Ryzen desktop processors and Radeon RX 5700 series graphics cards. We examine the performance impact of running these cards on older generations of PCIe.

Right now, PCIe 4.0 is like DDR5... I see many people cheering for its arrival, but it's iterative, not groundbreaking.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

Indeed. In fact, sometimes there are no benefits to be had in the obtaining, either. But, for the most part, I think there are benefits to be had with PCIe Gen 4, whether potential or otherwise, and I suspect that they will increase over time. In my case, because I run a network, my gaming PC will be able to serve me in other aspects of computing as well. I am told that these days gamers are also getting big on streaming. I wouldn't know if this is true personally, as I am not much of a hardcore gamer. In fact, I've never built myself a gaming rig despite building several of them for others. I'm excited about my new project, as I'm going to scratch another one off the bucket list.


----------



## EarthDog (Jan 2, 2020)

There is potential... it's just years down the road. 

If I was buying a PC now, PCIe 4.0 wouldn't be on my list of needs or wants at this point.


----------



## candle_86 (Jan 2, 2020)

Not needed yet. PCIe 2.0 wasn't too slow until around the GTX 1080; we are just now starting to use the bandwidth 3.0 offers and are not close to maxing it out. It might be useful for GPGPU computing in server farms, but for a normal person, no, it's not needed.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

I'm thinking Intel is gonna spring Gen 5 on us before long. They're going to have to do something soon if they expect to stay in the race.


----------



## hat (Jan 2, 2020)

I wonder if later pci-e versions could lead to cheaper devices? Imagine other cards in the future like the 5500XT that are only wired for x8 electrical because PCI-E has gotten fast enough to allow for that... now, in the case of the 5500XT, the fact that it only had 4GB vRAM is... a downside. Your graphics card and system memory shouldn't have to play fetch with data that's supposed to reside comfortably in vRAM anyway... but for other communications, x8 4.0 is likely fast enough for all but the fastest of cards... and x8 5.0 better still. I wonder if this could even lead to having more PCI-E lanes available overall, as things require fewer lanes because each lane has gotten so fast on its own...
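To put rough numbers on that, here's a quick back-of-the-envelope sketch. The per-lane rates are approximate usable figures, not spec-exact (Gen 1/2 lose more to 8b/10b encoding than Gen 3+ do to 128b/130b):

```python
# Approximate usable PCIe bandwidth per lane, per direction, in GB/s.
# Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b, hence the jump at Gen 3.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Usable one-way bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# Each generation doubles the per-lane rate, so a Gen 4 x8 link
# roughly matches a Gen 3 x16 link:
print(round(link_bandwidth(4, 8), 1))   # ~15.8 GB/s
print(round(link_bandwidth(3, 16), 1))  # ~15.8 GB/s
```

That equivalence is exactly why an x8-wired card is a smaller compromise on a 4.0 platform than on a 3.0 one.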


----------



## HTC (Jan 2, 2020)

hat said:


> I wonder if later pci-e versions could lead to cheaper devices? Imagine other cards in the future like the 5500XT that are only wired for x8 electrical because PCI-E has gotten fast enough to allow for that...



Not just that: they could make less powerful PCIe 4.0 cards with just x4 and still not be bandwidth-starved. Think of any current PCIe 3.0 card @ x8 having just that connection (pic below) instead of x16 physical with only x8 electrical, which is how it's currently done.


----------



## tabascosauz (Jan 2, 2020)

IceShroom said:


> *I am not muddying the thread*; you guys have been muddying AMD's PCI-e 4.0 launch from the first day it was announced. You spared not a second to criticize the X370/X470's PCI-e 2.0 lanes, yet now that AMD has given you PCI-e 4.0 you're still criticizing it??
> And what has PCI-e 4.0 ever done to you guys?? Why are you so against it?







1. Promontory is an ASMedia design.
2. The PCIe lanes that come off of the PCH have zero bearing on GPU performance.

3. PCIe 4.0 is marketed as universal to the Ryzen 3000 family. Not all 3000 SKUs are positioned as "premium products".
4. X570 is officially positioned as a "premium" product.
5. PCIe 4.0 is unusable unless you buy X570.

Hey, it was you who brought up HPC and Promontory's PCIe 2.0 x4 lanes...


At the end of the day, most of us buy more hardware than we actually need. Would an i3-9350K be plenty for my needs? Yes. Was the old 1070 sufficient, performance-wise? Yes. Did I want those upgrades anyway? Yes.

But it would serve you well to remind yourself that need does not equate to want. The question concerns the need for PCIe 4.0 for graphics purposes in the consumer hardware market.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

I know this is what so many are nattering about so I thought I'd share the video.


----------



## hat (Jan 2, 2020)

tl;dr: the card runs out of its small 4GB pool of vRAM, and the data swapping back and forth between the card and system RAM is bottlenecked by PCI-E 3.0 on an x8 card.


----------



## HTC (Jan 2, 2020)

hat said:


> tl;dr: the card runs out of its small 4GB pool of vRAM, and the data swapping back and forth between the card and system RAM is bottlenecked by PCI-E 3.0 on an x8 card.



Yup: that's pretty much it.

With the 8GB version of the card this doesn't happen as much because of the higher VRAM amount: much less data swapping back and forth, so much less of a bottleneck.
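A toy estimate makes the mechanism concrete: any assets that overflow VRAM have to cross the PCIe link whenever they're needed, so the link generation directly sets the cost of each spill. The 0.5 GB spill figure and the link rates below are illustrative assumptions, not measurements:

```python
def swap_time_ms(spill_gb: float, link_gb_per_s: float) -> float:
    """Milliseconds spent moving `spill_gb` of overflowed assets across the link."""
    return spill_gb / link_gb_per_s * 1000.0

# Hypothetical 0.5 GB of textures that don't fit in a 4 GB card's VRAM:
gen3_x8 = swap_time_ms(0.5, 7.9)    # PCIe 3.0 x8 ~ 7.9 GB/s usable
gen4_x8 = swap_time_ms(0.5, 15.8)   # PCIe 4.0 x8 ~ 15.8 GB/s usable
print(round(gen3_x8, 1), round(gen4_x8, 1))  # ~63.3 ms vs ~31.6 ms
```

Gen 4 halves the stall but doesn't remove it, while the 8GB card barely cares about the link generation because it rarely spills at all.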


----------



## arbiter (Jan 2, 2020)

HTC said:


> Yup: that's pretty much it.
> 
> With the 8GB version of the card this doesn't happen as much because of the higher VRAM amount: much less data swapping back and forth, so much less of a bottleneck.


Exactly, it's a VRAM issue, but there are a lot of people who ignore that it's the underlying problem and push the idea that you need PCIe 4.0, even though in that very test the 8GB card on 3.0 ran fine. If 3.0 were really the problem, then the 8GB card would have had the same problem on 3.0, but it doesn't.

Things are in an odd place at the moment. Mid-range cards used to be fine with 2GB or even 4GB of memory, but they are now fast enough that you start looking at them getting 8GB, which is the same as most upper-mid-range cards aimed at 1440p and entry-level 4K. It used to be, when 4GB was the norm and 8GB was high end, that you would see low-end cards getting slapped with 4GB of memory when 2GB was more than enough, and the 4GB was just a marketing ploy to sell the card at a bit of a markup.


----------



## Zach_01 (Jan 2, 2020)

AMD has produced the 5500XT with x8 wiring at most. Cost effective for AMD? Probably... Why is this frowned upon? I can't see why a company would not try to increase profit margins within reason (whose reason, you may ask). And the 5500 line isn't really a groundbreaking replacement for any card.
So (new) users will probably buy them for new systems and probably go with the X570's PCI-E 4.0, because right now it's the only platform with some upgradability and many, many options, CPU-wise and otherwise...
Yes, AMD, while cutting costs on 5500 production, also indirectly promotes X570 and PCI-E 4.0 in general. Companies are all about margins and profits, and Lisa Su does not love gamers. Big news!! I'm falling off the pink cloud...

Whether we like it or not, Intel seems to be in a mess right now. Intel will recover, and I hope it's sooner rather than later, so real competition will start again.


----------



## Aquinus (Jan 2, 2020)

I thought that the 4GB vs 8GB performance thing actually had to do with drivers and that newer drivers fixed that behavior to be more in line with the 8GB variant at PCIe 3.0. I can't however remember what review I saw that in and I haven't had my morning coffee yet.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

Zach_01 said:


> AMD has produced the 5500XT with x8 wiring at most. Cost effective for AMD? Probably... Why is this frowned upon? I can't see why a company would not try to increase profit margins within reason (whose reason, you may ask). And the 5500 line isn't really a groundbreaking replacement for any card.
> So (new) users will probably buy them for new systems and probably go with the X570's PCI-E 4.0, because right now it's the only platform with some upgradability and many, many options, CPU-wise and otherwise...
> Yes, AMD, while cutting costs on 5500 production, also indirectly promotes X570 and PCI-E 4.0 in general. Companies are all about margins and profits, and Lisa Su does not love gamers. Big news!! I'm falling off the pink cloud...
> 
> Whether we like it or not, Intel seems to be in a mess right now. Intel will recover, and I hope it's sooner rather than later, so real competition will start again.



Which is why AMD is right on track with their X570 release, even _if Intel_ plans on bringing us, the lowly end users, PCIe Gen 5 in the next couple of years. AMD will need all the capital they can get their hands on to pull their next rabbit from the hat, and I'm thinking the focus will primarily be on graphics cards. Meanwhile, Intel fans don't really seem to care much about the Spectre/Meltdown issues that Intel keeps failing to fix, because they keep buying their CPUs regardless. So if the fans don't care, I'm pretty sure Intel is going to keep on cranking out what are, in my estimation, defective chips with serious security vulnerabilities. AMD is providing a number of reasons to run with them instead of team blue, but I'm still wondering if Intel isn't waiting for the perfect moment to completely blow away the market with a complete game changer. Only time will tell.

Not that I'm a big SLI or CROSSFIRE fan but...


----------



## Zach_01 (Jan 2, 2020)

WHOFOUNDFUNGUS said:


> Which is why AMD is right on track with their X570 release, even _if Intel_ plans on bringing us, the lowly end users, PCIe Gen 5 in the next couple of years. AMD will need all the capital they can get their hands on to pull their next rabbit from the hat, and I'm thinking the focus will primarily be on graphics cards. Meanwhile, Intel fans don't really seem to care much about the Spectre/Meltdown issues that Intel keeps failing to fix, because they keep buying their CPUs regardless. So if the fans don't care, I'm pretty sure Intel is going to keep on cranking out what are, in my estimation, defective chips with serious security vulnerabilities. AMD is providing a number of reasons to run with them instead of team blue, but I'm still wondering if Intel isn't waiting for the perfect moment to completely blow away the market with a complete game changer. Only time will tell.
> 
> Not that I'm a big SLI or CROSSFIRE fan but...




Could be the case... We should never underestimate Intel. It is a giant company, after all, not relying almost entirely on the mainstream market as AMD does right now.

The real thing here is that AMD needs to stay on the edge, because Intel will eventually make a comeback, maybe in 2-3 years, and that could hit as "strong" as a sh*tstorm for AMD, which now needs every big or small ROI it can get, from anywhere.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

I have a sneaky suspicion this is where Intel may be going next and if I'm right this move could result in a complete paradigm shift in the entire PC industry. Again, it's anybody's guess.


----------



## hat (Jan 2, 2020)

I... don't think so. I mean, there could be a market for it, sure... but it's not going to replace desktop PCs for the same reason laptops will never replace desktop PCs. Even a laptop (nevermind something as small as this) only has so much room for cooling and power delivery. Move up to even something as small as an ITX system and suddenly the options explode. No longer are you limited to a 15w CPU that constantly throttles for one reason or another under any load heavier than a paperclip.


----------



## EarthDog (Jan 2, 2020)

hat said:


> I wonder if later pci-e versions could lead to cheaper devices? Imagine other cards in the future like the 5500XT that are only wired for x8 electrical because PCI-E has gotten fast enough to allow for that... now, in the case of the 5500XT, the fact that it only had 4GB vRAM is... a downside. Your graphics card and system memory shouldn't have to play fetch with data that's supposed to reside comfortably in vRAM anyway... but for other communications, x8 4.0 is likely fast enough for all but the fastest of cards... and x8 5.0 better still. I wonder if this could even lead to having more PCI-E lanes available overall, as things require fewer lanes because each lane has gotten so fast on its own...


A few dollars is all that will save, really.

A bit sad they had to do that to save a few dollars and they are still priced too high... 



Zach_01 said:


> AMD has produced 5500XT with x8 wiring at max. Cost effective for AMD? Probably... Why is this frowned upon? Can’t see why a company would not try to increase profit margins within reason(whose’s reason you may ask). And the 5500 line isn’t really a ground breaking replacement for any card.
> So (new)users will probably buy them for new systems and probably go with X570’s PCI-E 4.0 because now it’s the only platform with some upgradability and many many options, CPU wise and other as well...
> Yes AMD while cutting cost from 5500 production, also indirectly promotes X570 and PCI-E 4.0 in general. Companies are all about margins and profits and LisaSu does not love gamers. Big news!! I’m falling off the pink cloud...


It's frowned upon because of the large performance difference between it on 3.0 and 4.0. Clearly, in VRAM-limited situations, the x8 wiring is a significant detriment to performance, as moving this card to a 4.0 x16 slot shows marked improvements. I'd rather pay a couple of bucks more for that (or the 8GB card) than spend the premium on an X570/PCIe 4.0 board if I were buying today.

It is odd that they knew about the problem with VRAM (their slides showed a few titles that really took a hit), but didn't take the opportunity to state that it works best in their PCIe 4.0 ecosystem. Granted, that would likely alienate sales from those with PCIe 3.0 looking not to add a glass ceiling. But I agree, they had a chance to spin this positively, and... it didn't work out that way.


WHOFOUNDFUNGUS said:


> I'm thinking Intel is gonna spring Gen 5 on us before long. They're going to have to do something soon if they expect to stay in the race.





WHOFOUNDFUNGUS said:


> PCIe Gen 5 in the next couple of years.


Sweet jebus... now you are talking about PCIe 5.0 when 4.0 is really pretty useless? 

The PCIe lanes have nothing to do with Intel being competitive in the GPU market or whatever.



Aquinus said:


> I thought that the 4GB vs 8GB performance thing actually had to do with drivers and that newer drivers fixed that behavior to be more in line with the 8GB variant at PCIe 3.0. I can't however remember what review I saw that in and I haven't had my morning coffee yet.


When you do, please link it up. 

19.12.2/3 aren't it and the latter is the latest driver. The .3 driver fixed the debacle of the .2 driver.


----------



## arbiter (Jan 2, 2020)

EarthDog said:


> It's frowned upon because of the large performance difference between it on 3.0 and 4.0. Clearly, in VRAM-limited situations, the x8 wiring is a significant detriment to performance, as moving this card to a 4.0 x16 slot shows marked improvements. I'd rather pay a couple of bucks more for that (or the 8GB card) than spend the premium on an X570/PCIe 4.0 board if I were buying today.
> 
> It is odd that they knew about the problem with VRAM (their slides showed a few titles that really took a hit), but didn't take the opportunity to state that it works best in their PCIe 4.0 ecosystem. Granted, that would likely alienate sales from those with PCIe 3.0 looking not to add a glass ceiling. But I agree, they had a chance to spin this positively, and... it didn't work out that way.


First off, the card is only wired for x8, so it can't do x16. As for the improvement on 4.0 vs 3.0, yes, there is one, but it's not like it's an isolated thing. Take ANY GPU from any maker, including NVIDIA: you would get the same performance issues if it only had 4GB and ran out. It just means that if you went with the budget card, you'll have to lower settings to keep VRAM usage under what the card has. I have criticized AMD for a lot over the years, but this one really isn't something you can pin on them, when it would be users asking more of the card than it has to give.


----------



## EarthDog (Jan 2, 2020)

arbiter said:


> First off, the card is only wired for x8, so it can't do x16.


Did I state otherwise (no......)? If so, I didn't mean to. 



> As for the improvement on 4.0 vs 3.0, yes, there is one, but it's not like it's an isolated thing. Take ANY GPU from any maker, including NVIDIA: you would get the same performance issues if it only had 4GB and ran out. It just means that if you went with the budget card, you'll have to lower settings to keep VRAM usage under what the card has.


True, but here we are. Also, in the titles where running 1080p ultra eclipses the VRAM limit, performance still gets over 60 FPS. So I wouldn't have to lower settings due to a lack of horsepower, as you seem to be alluding to.

The thing is, though, Nvidia doesn't currently make a card (Turing for sure; Pascal, I don't think so) that is wired 4.0 x8 electrically and has enough horsepower to run 1080p at 60 FPS anyway. So while that is true, we do not have that situation. What we do have is AMD making a card that had more to give if it were wired properly. This is an oversight. In most of the titles where we see this issue, the card can run 60 FPS with Ultra settings when using PCIe 4.0 x16 bandwidth. How many users know this will happen, and how many users are buying premium X570 motherboards but entry-level GPUs? At minimum, AMD had a chance to tell us these work better in a 4.0 environment... you can't tell me with a straight face that most of these are going into PCIe 4.0 slots and that it isn't a big deal. IMO, these are mostly going into existing PCIe 3.0 boards.



> I have criticized AMD for a lot over the years, but this one really isn't something you can pin on them, when it would be users asking more of the card than it has to give.


Sure you can, see above. It has plenty to give. AMD neutered this card in VRAM-limited situations through its choice of x8 wiring.


----------



## Zach_01 (Jan 2, 2020)

I agree on the pricing of the 5500; they should have priced it at least $20 less. A smaller profit margin per card, but a lot more cards sold.
And an X550 should have been out by now with PCI-E 4.0 support, to fill the $100–160 price space for the latest chipset series.


----------



## EarthDog (Jan 2, 2020)

Zach_01 said:


> And an X550 should have been out by now with PCI-E 4.0 support, to fill the $100–160 price space for the latest chipset series.


Surely AMD users looking for a better price on the platform are waiting for that... until then, I would have to imagine this card is being put on 3.0 in an overwhelming majority of situations.


----------



## HTC (Jan 2, 2020)

arbiter said:


> Exactly, it's a VRAM issue, but there are a lot of people who ignore that it's the underlying problem and push the idea that you need PCIe 4.0, even though in that very test the 8GB card on 3.0 ran fine. *If 3.0 were really the problem, then the 8GB card would have had the same problem on 3.0, but it doesn't.*
> 
> Things are in an odd place at the moment. Mid-range cards used to be fine with 2GB or even 4GB of memory, but they are now fast enough that you start looking at them getting 8GB, which is the same as most upper-mid-range cards aimed at 1440p and entry-level 4K. It used to be, when 4GB was the norm and 8GB was high end, that you would see low-end cards getting slapped with 4GB of memory when 2GB was more than enough, and the 4GB was just a marketing ploy to sell the card at a bit of a markup.



But it does, though nowhere near to the same extent. This can be seen in the original article (in German) that spawned this whole discussion in the first place.

But you are partially right: it IS a VRAM issue but it's *NOT ONLY* a VRAM issue.

Up until the original article broke, all the PCIe scaling tests I saw (mostly here @ TPU) said the same thing: PCIe 3.0 bandwidth is enough for today's GPUs, as evidenced by the 2080 Ti scaling test.

The problem here is that all the tests (of which I'm aware) were done with cards that have high VRAM to begin with: all with 8+GB VRAM. But what happens if you test the exact same cards while forcibly limiting the available VRAM they have (is that even possible??), thus forcing more PCIe communication? Isn't the *whole point* of these tests to *SEE if the PCIe bandwidth is enough for these cards*?

As such, the methodology used in current PCIe scaling tests is flawed, because it's not really testing the PCIe bandwidth itself but rather how much resorting to PCIe communication degrades performance, *which IS SIMILAR, but not the same*. Why? Because the more VRAM you have, the less likely it is you'll need to communicate via PCIe.

@W1zzard : You may want to look into this and see what's possible in order to improve your PCIe scaling tests with future cards. I don't know if it's possible to forcibly limit a GPU's available VRAM but it seems to me the current test methodology doesn't exactly test what it's supposed to be testing and, as such, should be revised.


----------



## WHOFOUNDFUNGUS (Jan 2, 2020)

Well, it looks like I initiated another intense discussion. Thank you all for your input. I didn't expect so much intriguing dialog, but as you were all combing over the final points about PCIe Gen 4 etc., I decided to get on with my build and do a little prepping before I put the abomination in the case. But that story belongs to another thread. HAPPY NEW YEAR!


----------

