Friday, August 20th 2021

No PCIe Gen5 for "Raphael," Says Gigabyte's Leaked Socket AM5 Documentation

AMD might fall behind Intel on PCI-Express Gen 5 support, say sources familiar with the recent GIGABYTE ransomware attack and ensuing leak of confidential documents. If you recall, AMD had extensively marketed the fact that it was first-to-market with PCI-Express Gen 4, over a year ahead of Intel's "Rocket Lake" processor. The platform block-diagram for Socket AM5 states that the AM5 SoC puts out a total of 28 PCI-Express Gen 4 lanes. 16 of these are allocated toward PCI-Express discrete graphics, 4 toward a CPU-attached M.2 NVMe slot, another 4 lanes toward a discrete USB4 controller, and the remaining 4 lanes as chipset-bus.

Socket AM5 SoCs appear to have 4 more lanes to spare than the outgoing "Matisse" and "Vermeer" SoCs. On higher-end platforms these are taken up by the USB4 controller, while on lower-end motherboards they can be freed from that duty and wired to an additional M.2 NVMe slot instead. Thankfully, memory is one area where AMD will maintain parity with Intel, as Socket AM5 is being designed for dual-channel DDR5. The other SoC-integrated I/O, as well as the I/O from the chipset, appears identical to "Vermeer," with minor exceptions such as support for 20 Gbps USB 3.2 Gen 2x2. The socket also has provisions for display I/O for the generation's APUs. Intel's upcoming "Alder Lake-S" processor implements PCI-Express Gen 5, but only for the 16-lane PEG port; the CPU-attached NVMe slot, as well as downstream PCIe connectivity, is limited to PCIe Gen 4.
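For a rough sense of what that lane budget means in raw throughput, here is a minimal Python sketch; the per-lane rates are the usual approximate effective figures for 128b/130b encoding, not numbers taken from the leaked documents.

```python
# Back-of-the-envelope throughput for the leaked Socket AM5 lane budget.
# Approximate effective GB/s per lane, per direction (128b/130b encoding);
# ballpark figures, not values from the leak.
PER_LANE_GBPS = {"Gen3": 0.985, "Gen4": 1.969, "Gen5": 3.938}

# Lane allocation as described in the leaked block diagram.
AM5_LANES = {
    "PEG (graphics)": 16,
    "CPU-attached M.2": 4,
    "USB4 controller": 4,
    "chipset uplink": 4,
}

def report(gen: str) -> None:
    total = sum(AM5_LANES.values())
    print(f"{gen}: {total} lanes x {PER_LANE_GBPS[gen]:.3f} GB/s "
          f"= {total * PER_LANE_GBPS[gen]:.1f} GB/s aggregate")
    for name, lanes in AM5_LANES.items():
        print(f"  {name:>17}: x{lanes:<2} -> {lanes * PER_LANE_GBPS[gen]:5.1f} GB/s")

for gen in ("Gen3", "Gen4", "Gen5"):
    report(gen)
```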

118 Comments on No PCIe Gen5 for "Raphael," Says Gigabyte's Leaked Socket AM5 Documentation

#26
Hossein Almet
I'm still unable to use PCIe 4.0 x16 for my graphics card; I have to set the PCIe link to Gen 3, otherwise the system simply won't boot.
Posted on Reply
#28
Vya Domus
ValantarIf development is still stuck in the early 2010s, that's on developers, not the availability of fast interfaces.
You can't make changes without horrible consequence in this case. If you make a game and someone happens to use a terrible graphics card, that's fine, they just decrease the resolution or something like that. But if the bottleneck comes from insufficiently fast CPU-GPU-storage communication, you're done, there is nothing you can do. So developers can't change the way they write software if the hardware isn't already widespread.
Posted on Reply
#29
R0H1T
john_With Intel going to PCIe 5.0 we might see manufacturers
Because Intel "pays to play" ~ they literally pay a lot for many of these implementations!
Posted on Reply
#30
TumbleGeorge
ValantarUh ... are PCIe 3.0 and 4.0 slow? If so, why are we nowhere near making proper use of them? What you're saying would have made sense if we were still stuck on SATA. With PCIe 3.0 NVMe drives being common for half a decade, this is nowhere near accurate. If development is still stuck in the early 2010s, that's on developers, not the availability of fast interfaces.
(Yes, obviously the need for flash parallelism and the cost inherent to this is also an issue restricting performance, but most decent 3.0 drives can maintain random 4k speeds far above what most applications will ever come close to making use of.)


Sure, but what does that have to do with motherboard prices? Last I checked, the PSU is separate ;)

And I would expect motherboard makers to push back - they're a notoriously conservative lot (plus are corporations producing largely commodity products with low margins under late-stage capitalism), and have no interest in taking on the expenditure of adapting to new standards just because they would benefit consumers, the environment, etc. My hope: that OEMs already using 12VO-like proprietary solutions shift over to 12VO, leading to opportunities for slow and steady growth in replacement PSUs, opening the door for niche motherboards as well. 12VO would be great for ITX boards, saving potentially a lot of board space with only the 10-pin connector needed (and the necessary buck converters being small and quite flexible in where they are placed). But I don't have any real hope for this becoming widely available in the next 5+ years, sadly.
Why are we still stuck on USB 2.0? Why does software run on modern hardware like a 40-year-old Russian Lada? To be compatible, compatible, compatible with old scrap.
Posted on Reply
#31
TheLostSwede
News Editor
TumbleGeorgeWhy are we still stuck on USB 2.0? Why does software run on modern hardware like a 40-year-old Russian Lada? To be compatible, compatible, compatible with old scrap.
That is once again an opinion and in this case I would call it flawed.
Many old interfaces are long gone, and some could've disappeared years ago, but didn't, because they were more cost efficient than more modern interfaces. Look at the humble D-Sub VGA connector: it's only really disappeared off monitors with resolutions higher than the interface is capable of driving, i.e. north of 2048x1536. In some ways, it should've disappeared with the introduction of DVI and DFP, but DFP faded into obscurity long before the VGA connector did. Logic doesn't always apply to these things, and neither does compatibility sometimes, as there have been a lot of weird, proprietary connectors over the years, especially courtesy of Apple. At one point I had an old Sun Microsystems display for my PC that connected via five BNC connectors to a standard D-Sub VGA connector, much like you can connect to a DVI-I display with an HDMI to DVI adapter. I'm not sure I would call that compatibility, more like a dirty hack to make old hardware work with a new interface. This is also why we have so many different adapters between various standards. I guess more recently we can thank Apple for all the various dongles and little hubs that are required to make a Mac work with even the most rudimentary interfaces, due to their choice of going with Type-C connectors on all their laptops (I don't own any Apple products).

As for USB 2.0, well, most things we use on an everyday basis don't really need a faster interface. I mean, what benefit do you get from having a mouse or a keyboard connect over USB 3.x? Also, if you look at the design of a USB 3.x host controller, the USB 2.0 part is separate from the USB 3.x part, so technically they're two separate standards rolled into one.


Just be glad you don't work with anything embedded or industrial, those things all still use RS-422, RS-485, various parallel busses and what not, as that's where compatibility really matters, but it's been pushed to the extreme in some cases where more modern solutions are shunned, just because. Some of it obviously comes down to mechanical stability as well, as I doubt some more modern interfaces would survive on a factory floor.
Posted on Reply
#32
thesmokingman
This isn't a big deal and definitely not a reason to go one architecture vs another. No need to shed tears.
Posted on Reply
#33
Valantar
Vya DomusYou can't make changes without horrible consequence in this case. If you make a game and someone happens to use a terrible graphics card, that's fine, they just decrease the resolution or something like that. But if the bottleneck comes from insufficiently fast CPU-GPU-storage communication, you're done, there is nothing you can do. So developers can't change the way they write software if the hardware isn't already widespread.
So what you are saying, then, is that adoption cycles for new interfaces are long. Longer than five years. Which I agree with. But that just underscores my point: we already have plenty of fast interfaces that are yet to be utilized to even close to their full potential. Even PCIe 3.0 has lots left in the tank in most regards. And then we have 4.0, which both vendors now have, and which is almost entirely unutilized even on the hardware side. So, if adoption cycles are more than five years, and we already have one established but underutilized standard and one nascent standard in the market, what on earth is the value of pushing for another?

You brought this argument up as a way of there being potential consumer (and not just enterprise) benefits in PCIe 5.0. But the fact that PCIe 3.0 and 4.0 still aren't fully utilized entirely undermines that point. For there to be consumer value in 5.0, we would first need to be held back by current interfaces. We are not.


Also: what you're saying here isn't actually correct. Whether the bottleneck is the GPU's inability to process sufficient amounts of data or the interfaces' inability to transfer sufficient amounts of data, both can be alleviated (obviously to different degrees) by reducing the amount of data present in these operations. Reducing texture sizes or introducing GPU decompression (like DirectStorage does) reduces bandwidth requirements. This is of course dependent on a huge number of factors, but the same applies to graphics settings and whether your GPU is sufficiently powerful. It might be more difficult to scale for interface bandwidth, but on the flip side nobody is actually doing that (or programming bandwidth-aware games, at least AFAIK), which begs the question of what could be done if this was actually addressed. Just because games today lack options explicitly labeled and designed to alleviate such bottlenecks doesn't mean that such options are impossible to implement.
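To illustrate why such scaling is at least arithmetically plausible, here's a small sketch with purely illustrative numbers (the asset sizes, frame rate and compression ratios are assumptions, not measurements), comparing a hypothetical per-frame streaming budget against PCIe 3.0/4.0 x16 bandwidth:

```python
# Illustrative-only estimate of the PCIe bandwidth needed to stream texture
# data, and how texture size and GPU-side decompression change the picture.
# All asset sizes and ratios are made-up example values.
LINK_GBPS = {"PCIe 3.0 x16": 15.75, "PCIe 4.0 x16": 31.5}  # approx GB/s

def streaming_need(mb_per_frame: float, fps: int, compression_ratio: float) -> float:
    """GB/s that must cross the bus for a given per-frame streaming budget."""
    return (mb_per_frame / 1024) * fps / compression_ratio

scenarios = {
    "full-size textures, uncompressed":        (200, 60, 1.0),
    "full-size textures, GPU decompression":   (200, 60, 2.0),
    "half-resolution textures (1/4 the data)": (50, 60, 1.0),
}

for name, (mb, fps, ratio) in scenarios.items():
    need = streaming_need(mb, fps, ratio)
    fits = {link: need <= bw for link, bw in LINK_GBPS.items()}
    print(f"{name}: {need:.2f} GB/s needed; fits within: {fits}")
```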
Posted on Reply
#34
eidairaman1
The Exiled Airman
So where's the leaked info from Intel?
Posted on Reply
#35
Valantar
Bengt-ArneThat's nice, they have found the spec for Rembrandt ;-)



Source: www.cpu-rumors.com/amd-cpu-roadmap/
Rembrandt looks like it will trigger a wave of extremely attractive thin-and-lights. Looking forward to that for sure. Kind of surprised about the Zen3+ marking though - doesn't that mean stacked V-cache? Didn't think we'd be seeing that in mobile.
eidairaman1So where's the leaked info from Intel?
Lots of options:
- It's boring and they didn't post it because it's boring
- It's exciting and they're holding it for later
- They're AMD fans and want to spread AMD hype
- They're Intel fans and want to hurt AMD by spoiling its plans
- Intel paid them off to not leak it
- Intel is behind the hack

Yes, that last option is rather tongue-in-cheek :P
Posted on Reply
#36
john_
TheLostSwedeCame up with? I guess you don't understand how NAND flash works if that's how simple you think it is.
The issue with SSDs is not the controllers, but the NAND flash. We should see some improvements as the NAND flash makers stack more layers, but even so, the technology is quite limited if you want faster random speeds, and that's one of the reasons Intel was working on 3D XPoint memory with Micron, which later became Optane, no? It might not quite have worked out, but consumer SSDs are using a type of flash memory that was never really intended for what it's being used as today, yet it has scaled amazingly in what is just over a decade of SSDs taking over from spinning rust in just about every computer you can buy today.

Also, do you have any idea how long it takes to develop any kind of "chip" used in a modern computer or other device? It's not something you throw together in five minutes. In all fairness, the only reason there was a PCIe 4.0 NVMe controller so close to AMD's X570 launch was because AMD went to Phison, asked them to make something and gave them some cash to do it. It was what I'd call "cobbled together": technically it was a PCIe 3.0 controller with a PCIe 4.0 bus strapped onto it, hence why it performed as it did. It was also produced on a node that wasn't really meant for something handling the amount of data that PCIe 4.0 can deliver, so it ran hot as anything.

How long have we used PCIe 3.0, and how long did it take until GPUs took advantage of the bus? We pretty much had to get to a GTX 1080 for it to make a difference against PCIe 1.1 at 16 lanes, based on testing by TPU. So we're obviously going to see a similarly slow transition, unless something major happens in GPU design that lets GPUs take more advantage of the bus. So obviously the generational difference is going to be even smaller between 4.0 and 5.0 as long as everything else stays the same.
www.techpowerup.com/review/nvidia-geforce-gtx-1080-pci-express-scaling/24.html

Did you even read the stuff that was announced these past few days? Intel will only have PCIe 5.0 for the PEG slot initially, so you won't see any SSD support for their first consumer platform, which makes this argument completely moot. So maybe end of 2022 or sometime in 2023 we'll see the first PCIe 5.0 consumer SSDs.

The AQC113 was launched a couple of months ago, but why does it matter if Intel supported PCIe 4.0 or not? We're obviously going to see a wider move towards PCIe 4.0 for many devices, of which one of the first will be USB4 host controllers, as you can see above. I don't understand why you think that only Intel can push the ecosystem forward, as they're far from the single driving force in the computer industry, as not all devices are meant for PC use only. There are plenty of ARM based server processors with PCIe 4.0 and Intel isn't even the first company with PCIe 5.0 in their processors.

I think you need to broaden your horizons a bit before making a bunch of claims about things you have limited knowledge about.
Was it so difficult? THIS is a reply. Not that other post where you even misinterpreted what I wrote, only to say nothing.
I am ignoring in this post some attitude, especially that last line. It's understandable. It probably also made you feel nice.
I am also not totally agreeing with some parts. Other parts just say what I already wrote. But it is a very nice reply.
As for the AQC113, they probably got paid by Intel, I mean the same case as AMD and Phison if that story is true, or they simply decided that the user base is big enough for them with Intel in the PCIe 4.0 game.
Posted on Reply
#37
Vya Domus
ValantarAlso: what you're saying here isn't actually correct. Whether the bottleneck is the GPU's inability to process sufficient amounts of data or the interfaces' inability to transfer sufficient amounts of data, both can be alleviated (obviously to different degrees) by reducing the amount of data present in these operations.
One is a soft limit, the other one is a hard limit. How do you alleviate the performance issues of a game that are the direct result of the fact that it sends too much information back and forth between the CPU and GPU? You rewrite the game engine and all the logic? Because that's about the only thing that you can do; there is no slider that you can add to adjust for that. And in fact that's what developers do, and so as a result games have to be written from the get-go to underutilize that interface. The same thing happened with hard drives: games have been written with slow HDDs in mind for a long time, and that's why SSDs made no difference whatsoever despite being orders of magnitude better in access times and raw IOPS. As a result people assumed that there was no point in having faster storage for games other than to minimize load times, but of course that wasn't true, there really are things that you can't do from a performance standpoint with slow storage. It's the same story here.
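To put rough numbers on that access-time gap, a small sketch (typical ballpark latencies, not measured figures) converting per-access latency into queue-depth-1 random-read throughput:

```python
# Rough arithmetic behind "slow storage imposes a hard limit on random access":
# convert per-access latency into achievable random 4K read throughput.
# Latency figures are typical ballpark values, not benchmarks.
devices = {
    "7200 RPM HDD": 0.008,    # ~8 ms per random access
    "SATA SSD":     0.0001,   # ~100 us
    "NVMe SSD":     0.00002,  # ~20 us
}

BLOCK_KB = 4  # random 4K reads at queue depth 1

for name, latency_s in devices.items():
    iops = 1 / latency_s
    mbps = iops * BLOCK_KB / 1024
    print(f"{name:>12}: ~{iops:,.0f} IOPS -> ~{mbps:.1f} MB/s random 4K (QD1)")
```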

If you can't see why these things impose hard restrictions on performance and why you need to be sure that your customers have the required hardware first before you change the way you write the software then there is nothing I can add to convince you.
Posted on Reply
#38
TheLostSwede
News Editor
john_As for the AQC113, they probably got paid by Intel, I mean the same case as AMD and Phison if that story is true, or they simply decided that the user base is big enough for them with Intel in the PCIe 4.0 game.
PCI Express is a standard that anyone can license, Intel doesn't hold any specific patents to it.
Please see:
pcisig.com/membership
Posted on Reply
#39
john_
TheLostSwedePCI Express is a standard that anyone can license, Intel doesn't hold any specific patents to it.
Please see:
pcisig.com/membership
I am just using YOUR example with AMD and Phison. Intel could be asking companies to create more stuff to take advantage of a feature it offers in its latest platform. Of course there could be other reasons, as I mentioned before. But the fact is that we haven't seen a huge plethora of PCIe 4.0 products, and more importantly, products that really take advantage of the higher bandwidth of PCIe 4.0. Let's see if this repeats with PCIe 5.0.
Posted on Reply
#40
medi01
btarunrThankfully, memory is one area where AMD will maintain parity with Intel, as Socket AM5 is being designed for dual-channel DDR5.
OMG, so cool that at least at something AMD "will maintain parity" with unseen Intel greatness, I'm so excited, OMG.
Posted on Reply
#41
TheLostSwede
News Editor
john_I am just using YOUR example with AMD and Phison. Intel could be asking companies to create more stuff to take advantage of a feature it offers in its latest platform. Of course there could be other reasons, as I mentioned before. But the fact is that we haven't seen a huge plethora of PCIe 4.0 products, and more importantly, products that really take advantage of the higher bandwidth of PCIe 4.0. Let's see if this repeats with PCIe 5.0.
My example? It wasn't an example, it's what happened.
Why would Intel pay companies to make things that would compete with Intel products? That makes no sense at all.

Again, the reasons we haven't seen a huge amount of products are that 1. it takes time to develop, 2. it would most likely be made on a fairly cutting-edge node and, as you surely know, there's limited fab space, and 3. a lot of things simply don't need PCIe 4.0.

There are already PCIe 5.0 SSDs in the making, but not for you or me.
www.techpowerup.com/284334/samsung-teases-pcie-5-0-enterprise-ssd-coming-q2-2022
www.anandtech.com/show/16703/marvell-announces-first-pcie-50-nvme-ssd-controllers

Expect it to take even longer for consumer PCIe 5.0 devices to appear compared to PCIe 4.0.

Honestly though, I really don't get you, you keep going on and on about something without even trying to, or wanting to understand how the industry works. It's really quite annoying.

Oh and you can find all certified PCIe 4.0 devices here. It looks like quite a few to me, it's just that most of them aren't for consumers.
pcisig.com/developers/integrators-list?field_version_value%5B%5D=4&field_il_comp_product_type_value=All&keys=
Posted on Reply
#42
TheoneandonlyMrK
TheLostSwedePCI Express is a standard that anyone can license, Intel doesn't hold any specific patents to it.
Please see:
pcisig.com/membership
Do you not think AMD might implement a Gen-Z/CCIX connector after PCIe 4, just because of that license?

After all, AMD has updated PCIe before while retaining the same CPU socket; it's a good inflection point to introduce a new protocol, likely PCIe-conformant and supporting in nature.
Posted on Reply
#43
john_
TheLostSwedeMy example? It wasn't an example, it's what happened.
Why would Intel pay companies to make things that would compete with Intel products? That makes no sense at all.
I guess your arguments are only valid when you use them, not when others use them.
Again, the reasons why we haven't seen a huge amount of products is because 1. it takes time to develop 2. it would most likely be something made on a fairly cutting edge node and as you surely know, there's limited fab space and 3. a lot of things simply don't need PCIe 4.0.

There are already PCIe 5.0 SSDs in the making, but not for you or me.
www.techpowerup.com/284334/samsung-teases-pcie-5-0-enterprise-ssd-coming-q2-2022
www.anandtech.com/show/16703/marvell-announces-first-pcie-50-nvme-ssd-controllers

Expect it to take even longer for consumer PCIe 5.0 devices to appear compared to PCIe 4.0.
Let me repeat myself.
We will see if development of PCIe 5.0 products ends up slower, at the same pace, or faster compared to PCIe 4.0. I believe it will be (much) faster.
Honestly though, I really don't get you, you keep going on and on about something without even trying to, or wanting to understand how the industry works. It's really quite annoying.
Your attitude is also annoying, but I am not complaining.
Oh and you can find all certified PCIe 4.0 devices here. It looks like quite a few to me, it's just that most of them aren't for consumers.
pcisig.com/developers/integrators-list?field_version_value%5B%5D=4&field_il_comp_product_type_value=All&keys=
Oh, how nice!
Posted on Reply
#44
TheLostSwede
News Editor
john_Let me repeat myself.
We will see if development of PCIe 5.0 products ends up slower, at the same pace, or faster compared to PCIe 4.0. I believe it will be (much) faster.
Intel had PCIe 5.0 devices in 2019, not that it matters, since there's nothing to plug in to it, just like their upcoming desktop platform.
newsroom.intel.com/news/intel-driving-data-centric-world-new-10nm-intel-agilex-fpga-family/

I really doubt it'll be any faster, but you're refusing to understand what I've mentioned, so I give up. Bye bye.
TheoneandonlyMrKDo you not think AMD might implement a Gen-Z/CCIX connector after PCIe 4, just because of that license?

After all, AMD has updated PCIe before while retaining the same CPU socket; it's a good inflection point to introduce a new protocol, likely PCIe-conformant and supporting in nature.
AMD is a board member of the PCI-SIG, since PCIe is what everything from a Raspberry Pi 4 CM to Annapurna's custom server chips for Amazon uses.
Unless there's an industry wide move to something else, I think we're going to keep using PCIe for now.
We're obviously going to be switching to something different at one point, but we're absolutely not at a point where PCIe is getting useless in most devices.
I'm sure we'll see very high-end server platforms switch to something else in the near future, but a regular PC doesn't have multiple CPU sockets or FPGA cards for real-time computational tasks, so the requirements for a wider bus simply aren't there yet.
CCIX is unlikely to ever end up in consumer platforms, but Gen-Z/CXL might (AMD is in both camps). I also have a feeling, as with so many past standards, that whatever becomes the de facto standard, will end up being managed by the PCI-SIG. They've taken over a lot of standards, like PCIe, M.2 etc.
en.wikipedia.org/wiki/PCI-SIG
Posted on Reply
#45
RedBear
AnarchoPrimitivThis article implies that when AMD made the switch to PCIe 4.0, it is comparable to this situation, when that's hardly the case considering PCIe 3.0 was released in 2010 and the first PCIe 4.0 motherboards were released in 2019....that's nine years, whereas PCIe 4.0 has only been around for approximately two years and hasn't even been fully saturated yet by a GPU.
Actually, I do think that it's comparable, PCIe Gen 4 was/is just as unnecessary for nearly every consumer as PCIe Gen 5 is, whether the specifications were finalised in 2011 or 2017 makes little difference. The actual difference is that Intel is offering Gen 5 only on the x16 lane connection, making its value even more questionable, unless Intel plans to release the PCIe Gen 5 equivalent of the perplexing RX 6600 XT...
Posted on Reply
#46
windwhirl
RedBearActually, I do think that it's comparable, PCIe Gen 4 was/is just as unnecessary for nearly every consumer as PCIe Gen 5 is, whether the specifications were finalised in 2011 or 2017 makes little difference. The actual difference is that Intel is offering Gen 5 only on the x16 lane connection, making its value even more questionable, unless Intel plans to release the PCIe Gen 5 equivalent of the perplexing RX 6600 XT...
On that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
Posted on Reply
#47
RedBear
windwhirlOn that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
I guess it could get used for an updated Hyper M.2 card? But it's anyone's guess when something similar will get released; the first, enterprise-grade, PCIe Gen 5 SSDs are planned for Q2 2022 at any rate, so it's going to be a while before we get commercial M.2 solutions.
Posted on Reply
#48
Valantar
windwhirlOn that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
Yeah, AnandTech speculated on whether it might be aimed at an x8+x4+x4 bifurcated setup with GPU+2xNVMe. Though IMO, it's just a "first!!1!!!1!!!!1111!" spec sheet addition to show that they're adopting new tech, which ... come on. Aren't the new cores (which look very good!) and DDR5 (which will be useful for iGPUs if nothing else) enough? I guess it's "free" in that they already have the blocks from their enterprise hardware development, but it would make more sense to make the x4 NVMe link (and maybe the chipset uplink too?) 5.0 than the PEG slot. It's a pretty weird decision.
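For context on that bifurcation idea, here's a quick sketch of what an x8+x4+x4 Gen 5 split would give each device, using approximate effective per-lane rates (illustrative figures, not from anything in the leak):

```python
# Bandwidth of a hypothetical x8+x4+x4 PCIe 5.0 bifurcation of the x16 PEG
# slot, using approximate effective GB/s per lane (one direction).
GEN4_PER_LANE = 1.969
GEN5_PER_LANE = 3.938

split = {"GPU": 8, "NVMe #1": 4, "NVMe #2": 4}

for device, lanes in split.items():
    print(f"{device}: x{lanes} Gen5 ~ {lanes * GEN5_PER_LANE:.1f} GB/s "
          f"(vs x{lanes} Gen4 ~ {lanes * GEN4_PER_LANE:.1f} GB/s)")

print(f"Full x16 Gen4 slot for comparison: {16 * GEN4_PER_LANE:.1f} GB/s")
```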
Vya DomusOne is a soft limit, the other one is a hard limit. How do you alleviate the performance issues of a game that are the direct result of the fact that it sends too much information back and forth between the CPU and GPU? You rewrite the game engine and all the logic? Because that's about the only thing that you can do; there is no slider that you can add to adjust for that. And in fact that's what developers do, and so as a result games have to be written from the get-go to underutilize that interface. The same thing happened with hard drives: games have been written with slow HDDs in mind for a long time, and that's why SSDs made no difference whatsoever despite being orders of magnitude better in access times and raw IOPS. As a result people assumed that there was no point in having faster storage for games other than to minimize load times, but of course that wasn't true, there really are things that you can't do from a performance standpoint with slow storage. It's the same story here.

If you can't see why these things impose hard restrictions on performance and why you need to be sure that your customers have the required hardware first before you change the way you write the software then there is nothing I can add to convince you.
It's debatable how hard this limit is - sure, there's less flexibility than there is with pure graphics quality scaling, but saying there is zero room for scaling in bandwidth requirements without making the game unplayable seems overly deterministic. The main bandwidth hog is texture data, right? So reducing that will reduce bandwidth needs. DirectStorage will, at least at first (before the inevitable push for 128k textures now that we "have the bandwidth", I guess), reduce bandwidth needs. And so on. There's always flexibility. Just because the industry has up until now not been particularly limited in this regard (or has been bottlenecked elsewhere) and thus hasn't had any incentive to put work into implementing scaling on this particular level doesn't mean it isn't possible.
Posted on Reply
#49
TheoneandonlyMrK
TheLostSwedeIntel had PCIe 5.0 devices in 2019, not that it matters, since there's nothing to plug in to it, just like their upcoming desktop platform.
newsroom.intel.com/news/intel-driving-data-centric-world-new-10nm-intel-agilex-fpga-family/

I really doubt it'll be any faster, but you're refusing to understand what I've mentioned, so I give up. Bye bye.



AMD is a board member of the PCI-SIG, since PCIe is what everything from a Raspberry Pi 4 CM to Annapurna's custom server chips for Amazon uses.
Unless there's an industry wide move to something else, I think we're going to keep using PCIe for now.
We're obviously going to be switching to something different at one point, but we're absolutely not at a point where PCIe is getting useless in most devices.
I'm sure we'll see very high-end server platforms switch to something else in the near future, but a regular PC doesn't have multiple CPU sockets or FPGA cards for real-time computational tasks, so the requirements for a wider bus simply aren't there yet.
CCIX is unlikely to ever end up in consumer platforms, but Gen-Z/CXL might (AMD is in both camps). I also have a feeling, as with so many past standards, that whatever becomes the de facto standard, will end up being managed by the PCI-SIG. They've taken over a lot of standards, like PCIe, M.2 etc.
en.wikipedia.org/wiki/PCI-SIG
Fair enough, but it's not impossible to run two protocols over one connection and bus either.
Posted on Reply
#50
docnorth
Many people on this thread are going back and forth between interfaces, lanes, cost, speeds etc. and missing the practical point. PCIe 4 will be the almost-skipped generation (@AnarchoPrimitiv more or less pointed this out), mostly because Intel was unable to move on from 14 nm lithography and decided to jump directly to PCIe 5; Rocket Lake is just a parenthesis for marketing reasons. Actually we have been stuck, like Intel, on PCIe 3 for over a decade and we think it's normal, because our NVMe drives are faster than SATA SSDs. But storage and RAM are much larger now, some newer games won't even fit on a small SSD, and consequently faster interfaces and connections will (as always) become a necessity. After a transition interval, in late 2022 or early 2023, the adoption of PCIe 5 will be almost eruptive IMO.
AnarchoPrimitivEpyc Genoa will have PCIe 5.0, and that's actually where it's needed, but PCIe 5.0 seems to me to be a completely unnecessary marketing point for a consumer platform at this time (or even in a years time) that only has the potential to drive up costs. The overwhelming majority of people still don't even have a second gen PCIe 4.0 NVMe SSD, and we can all agree that the difference between a PCIe 3.0 NVMe SSD and a 4.0 SSD is imperceptible. This article implies that when AMD made the switch to PCIe 4.0, it is comparable to this situation, when that's hardly the case considering PCIe 3.0 was released in 2010 and the first PCIe 4.0 motherboards were released in 2019....that's nine years, whereas PCIe 4.0 has only been around for approximately two years and hasn't even been fully saturated yet by a GPU.
Maybe indirectly, but you nailed it. PCIe 4 will be the almost skipped (or lost) generation.
Posted on Reply