
No PCIe Gen5 for "Raphael," Says Gigabyte's Leaked Socket AM5 Documentation

Joined
Feb 20, 2019
Messages
8,265 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
So we're looking at the consumer AM5 platform having a total of 32 PCIe Gen4 lanes, potentially only 28 lanes on a board with USB4?

Definitely behind Intel's Gen5 but arguably still waaaay more than any consumer is going to need for the shelf-life of these first-gen AM5 CPUs and chipsets.
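Back-of-the-envelope numbers for those lane counts (a rough sketch; per-lane rates are approximate effective throughput after 128b/130b encoding, and the lane counts are the rumoured ones from this leak):
Code:
# Approximate one-direction PCIe bandwidth in GB/s.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen, lanes):
    return GBPS_PER_LANE[gen] * lanes

print(f"32 Gen4 lanes: {link_bandwidth(4, 32):.0f} GB/s")  # ~63 GB/s in aggregate
print(f"28 Gen4 lanes: {link_bandwidth(4, 28):.0f} GB/s")  # ~55 GB/s with USB4 carved out
print(f"16 Gen5 lanes: {link_bandwidth(5, 16):.0f} GB/s")  # same ballpark as 32 Gen4 lanes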
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,597 (2.41/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
:roll::roll::roll::roll::roll::roll:
I can include smiles too :p

My English is far from perfect, but I am pretty sure that "we didn't see manufacturers really using that standard" is somewhat different from "equals no support to you".
SSD manufacturers came up with PCIe 4.0 models that were not much better than PCIe 3.0 models, with the exception of sequential speeds. If I am not mistaken, Samsung didn't really rush to come out with PCIe 4.0 solutions either. And as for PCIe 4.0 on graphics cards, it's just normal for manufacturers to use the latest version of the bus, but my point was that 4.0 doesn't offer much over 3.0 in 3D performance, something that has usually been the case in recent years, not just this last generation.

That is exactly my point. Now that Intel is coming out with PCIe 5.0 support, we will see how difficult it is to create PCIe 5.0 SSDs that are clearly faster than previous-generation SSDs in just about every benchmark. I probably won't remember your post by then; I hope you will, and that you'll mention me either to tell me that I was wrong, or that I was probably right.
About that AQC113: was it available after AM4 came out, or only after Intel supported PCIe 4.0? If it came out way before Intel supported 4.0, then we have an example. But I still believe it is an exception, not the rule.

Finally, more smiles!!!!!
:peace::clap::pimp::toast::laugh::roll::love:
Came up with? I guess you don't understand how NAND flash works if that's how simple you think it is.
The issue with SSDs is not the controllers, but the NAND flash. We should see some improvements as the NAND flash makers stack more layers, but even so, the technology is quite limited if you want faster random speeds, and that's one of the reasons Intel was working on 3D XPoint memory with Micron, which later became Optane, no? It might not quite have worked out, but consumer SSDs use a type of flash memory that was never really intended for what it's being used for today, yet it has scaled amazingly in the decade or so it has taken SSDs to take over from spinning rust in just about every computer you can buy.

Also, do you have any idea how long it takes to develop any kind of "chip" used in a modern computer or other device? It's not something you throw together in five minutes. In all fairness, the only reason there was a PCIe 4.0 NVMe controller so close to AMD's X570 launch was that AMD went to Phison, asked them to make something and gave them some cash to do it. It was what I'd call "cobbled together": technically a PCIe 3.0 controller with a PCIe 4.0 bus strapped onto it, hence why it performed as it did. It was also produced on a node that wasn't really meant for something handling the amount of data PCIe 4.0 can deliver, so it ran hot as anything.

How long have we used PCIe 3.0, and how long did it take until GPUs took advantage of the bus? We pretty much had to get to a GTX 1080 for it to make a difference against PCIe 1.1 at 16 lanes, based on testing by TPU. So we're obviously going to see a similarly slow transition, unless something major happens in GPU design that lets GPUs take more advantage of the bus. The generational difference is obviously going to be even smaller between 4.0 and 5.0 as long as everything else stays the same.

Did you even read the stuff that was announced these past few days? Intel will only have PCIe 5.0 for the PEG slot initially, so you won't see any PCIe 5.0 SSD support on their first consumer platform, which makes this argument completely moot. So maybe at the end of 2022, or sometime in 2023, we'll see the first PCIe 5.0 consumer SSDs.

The AQC113 was launched a couple of months ago, but why does it matter whether Intel supported PCIe 4.0 or not? We're obviously going to see a wider move towards PCIe 4.0 for many devices, one of the first being USB4 host controllers, as you can see above. I don't understand why you think that only Intel can push the ecosystem forward; they're far from the only driving force in the computer industry, and not all devices are meant for PC use. There are plenty of ARM-based server processors with PCIe 4.0, and Intel isn't even the first company with PCIe 5.0 in their processors.

I think you need to broaden your horizons a bit before making a bunch of claims about things you have limited knowledge about.

Motherboard prices went way up when Gen 4 was implemented. Scary to think how much Z690 is going to cost with a bleeding-edge Gen 5 implementation. And all for something which is of no use to 99.99% of people who'll buy these desktop CPUs. Sticking with Gen 4 seems like the right move in the medium term.
For several reasons, not just because of PCIe 4.0. As for Z690, the single PCIe 5.0 slot isn't likely to increase cost in and of itself, as it shouldn't be far enough away from the CPU to require redrivers or any other kind of signal conditioners. The only thing that might increase costs marginally would be if a change to even higher-quality PCB materials were needed to reduce noise, but I don't think that will be the case compared to a PCIe 4.0 board, as that was one of the other changes, compared to PCIe 3.0 boards, that increased costs.

I think integrated components like Ethernet controllers are one of the more realistic places to see this come true, as they don't have to care as much about backwards compatibility. With SSDs, on the other hand, I would love to see 4.0 x2 NVMe become a thing for fast, cheap(er) drives. But given that those would be limited to 3.0 x2 on older boards/chipsets, and would thus lose significant performance (at least in benchmarks), they would be pissed on by literally every reviewer out there ("yes, it's a good drive, but it sacrifices performance on older computers for no good reason", alongside the inevitable value comparison of a just-launched MSRP product against one with 6-12 months of gradual price drops behind it). So I don't see it happening, even if they would be great drives and would allow for more storage slots per motherboard at the same or potentially lower board cost.

That's the problem with adopting new standards in open ecosystems with large install bases - if you break compatibility, you shrink your potential customer base dramatically.

Overall, I think this is a very good choice on AMD's part. The potential benefits of this are so far in the future that it wouldn't make sense, and the price increase for motherboards (and development costs for CPUs and chipsets) would inevitably be passed on to customers. No thanks. PCIe 4.0 is plenty fast. I'd like more options to bifurcate instead, but 5.0 can wait for quite a few years.
I guess the problem with SSDs is that PCIe 4.0 x2 gives you about the same performance as PCIe 3.0 x4, and the cost of developing a new controller is higher than continuing to use already-developed products that deliver similar performance. There ought to be benefits in things like notebooks though, as they have limited PCIe lanes. As PCIe 4.0 x2 would be considered a "budget" drive, it seems like the few controllers that are available are DRAM-less, which isn't all that interesting, to me at least.

As you point out, it's kind of frustrating that the PCIe interface couldn't have been designed with "forward" compatibility too, in the sense that a PCIe 4.0 x2 device plugged into a PCIe 3.0 x4 interface could utilise the full bandwidth available, instead of only PCIe 3.0 x2.
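For what it's worth, here's the negotiation rule behind that frustration as a minimal sketch (real link training involves far more, but the outcome is the same): both ends settle on the highest common generation and the widest common width, so spare width on one side can't make up for a lower generation on the other. Per-lane rates are approximate effective GB/s after encoding.
Code:
# Simplified PCIe link negotiation: highest common generation,
# widest common width; one axis can't compensate for the other.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # effective GB/s per lane

def negotiate(dev_gen, dev_lanes, slot_gen, slot_lanes):
    gen = min(dev_gen, slot_gen)
    lanes = min(dev_lanes, slot_lanes)
    return gen, lanes, round(GBPS_PER_LANE[gen] * lanes, 2)

print(negotiate(4, 2, 3, 4))  # 4.0 x2 SSD in a 3.0 x4 slot -> (3, 2, 1.97 GB/s)
print(negotiate(3, 4, 3, 4))  # plain 3.0 x4 drive, same slot -> (3, 4, 3.94 GB/s)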

That's also why we're stuck with the ATX motherboard standard, even though it's not really fit for purpose any more and barely resembles what it did when it launched. That said, I do not miss AT motherboards...

New, faster interfaces are always nice, but in terms of PC technology, we've only just gotten a new standard, and outside of NVMe storage we're just scratching the surface of what PCIe 4.0 can offer. Maybe AMD made a "bad choice" by going with PCIe 4.0 if it's really set to be replaced by PCIe 5.0 so soon, but I highly doubt it, mainly due to the limitations of the trace lengths. PCIe 3.0 is the only standard that doesn't need additional redrivers to function over any kind of distance, at least until someone works out a better way of making motherboards. PCIe 4.0 is just about manageable, but 5.0 is going to have much bigger limitations and 6.0 might be even worse.
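To put rough numbers on that trace-length squeeze (a simplified view; PCIe 3.0 through 5.0 all use NRZ signalling, so the fundamental frequency is half the transfer rate, and board losses climb steeply with frequency):
Code:
# Signalling rate per generation and the resulting NRZ fundamental frequency.
for gen, gt_s in {3: 8, 4: 16, 5: 32}.items():
    print(f"PCIe {gen}.0: {gt_s} GT/s -> fundamental at {gt_s // 2} GHz")
Each generation doubles the rate, which is why every step up shortens the distance a signal survives on a standard motherboard without redrivers or retimers.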

You'd need a different PSU, wouldn't you? Regardless, the motherboard manufacturers were pushing back against that standard, last I heard.
Motherboard manufacturers unite against Intel's efficient PSU plans | PC Gamer
Though, take it with a grain of salt, as always.
Actually, you can get a simple adapter from a current PSU to the ATX12VO connector; you just need to discount the 5V and 3.3V rails when you look at the total wattage of your PSU.
As such, you might have to discount 100W or so from the total output of the PSU.
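As a worked example of that rule of thumb (the wattages below are hypothetical label values, not any specific unit; check the sticker on your own PSU):
Code:
# Estimating usable capacity through an ATX-to-12VO adapter.
total_rated_w = 650   # hypothetical combined rating from the PSU label
minor_rails_w = 100   # hypothetical combined 3.3 V + 5 V rating, unusable via the adapter

print(f"~{total_rated_w - minor_rails_w} W effectively available on 12 V")  # ~550 W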
 
Joined
Nov 7, 2016
Messages
159 (0.05/day)
Processor 5950X
Motherboard Dark Hero
Cooling Custom Loop
Memory Crucial Ballistix 3600MHz CL16
Video Card(s) Gigabyte RTX 3080 Vision
Storage 980 Pro 500GB, 970 Evo Plus 500GB, Crucial MX500 2TB, Crucial MX500 2TB, Samsung 850 Evo 500GB
Display(s) Gigabyte G34WQC
Case Cooler Master C700M
Audio Device(s) Bose
Power Supply AX850
Mouse Razer DeathAdder Chroma
Keyboard MSI GK80
Software W10 Pro
Benchmark Scores CPU-Z Single-Thread: 688 Multi-Thread: 11940
I'm still unable to use PCIe 4.0 x16 for my graphics card; I have to set the PCIe lanes to Gen 3, otherwise the system simply won't boot.
 
Joined
Jan 25, 2018
Messages
5 (0.00/day)
That's nice, they have found the spec for Rembrandt ;-)

[Image: AMD Zen roadmap (AMD-Zen-Roadmap-1024x683.png)]


Source: https://www.cpu-rumors.com/amd-cpu-roadmap/
 
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 MHz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
If development is still stuck in the early 2010s, that's on developers, not the availability of fast interfaces.
You can't make changes without horrible consequences in this case. If you make a game and someone happens to use a terrible graphics card, that's fine; they just decrease the resolution or something like that. But if the bottleneck comes from insufficiently fast CPU-GPU-storage communication, you're done; there is nothing you can do. So developers can't change the way they write software if the hardware isn't already widespread.
 
Joined
Sep 1, 2020
Messages
2,343 (1.52/day)
Location
Bulgaria
Uh ... are PCIe 3.0 and 4.0 slow? If so, why are we nowhere near making proper use of them? What you're saying would have made sense if we were still stuck on SATA. With PCIe 3.0 NVMe drives being common for half a decade, this is nowhere near accurate. If development is still stuck in the early 2010s, that's on developers, not the availability of fast interfaces.
(Yes, obviously the need for flash parallelism and the cost inherent to this is also an issue restricting performance, but most decent 3.0 drives can maintain random 4k speeds far above what most applications will ever come close to making use of.)


Sure, but what does that have to do with motherboard prices? Last I checked, the PSU is separate ;)

And I would expect motherboard makers to push back - they're a notoriously conservative lot (plus are corporations producing largely commodity products with low margins under late-stage capitalism), and have no interest in taking on the expenditure of adapting to new standards just because they would benefit consumers, the environment, etc. My hope: that OEMs already using 12VO-like proprietary solutions shift over to 12VO, leading to opportunities for slow and steady growth in replacement PSUs, opening the door for niche motherboards as well. 12VO would be great for ITX boards, saving potentially a lot of board space with only the 10-pin connector needed (and the necessary buck converters being small and quite flexible in where they are placed). But I don't have any real hope for this becoming widely available in the next 5+ years, sadly.
Why are we stuck on USB 2.0? Why does software run on modern hardware like a 40-year-old Russian Lada? To be compatible, compatible, compatible with old scrap.
 

TheLostSwede

News Editor
Why are we stuck on USB 2.0? Why does software run on modern hardware like a 40-year-old Russian Lada? To be compatible, compatible, compatible with old scrap.
That is once again an opinion and in this case I would call it flawed.
Many old interfaces are long gone, and some could have disappeared years ago but didn't, simply because they were more cost-efficient than more modern interfaces. Look at the humble D-Sub VGA connector: it only really disappeared off monitors with resolutions higher than the interface can drive, i.e. north of 2048x1536. In some ways it should have disappeared with the introduction of DVI and DFP, but DFP faded into obscurity long before the VGA connector did. Logic doesn't always apply to these things, and neither does compatibility sometimes, as there have been a lot of weird, proprietary connectors over the years, especially courtesy of Apple. At one point I had an old Sun Microsystems display for my PC that connected via five BNC connectors to a standard D-Sub VGA connector, much like you can connect to a DVI-I display with an HDMI to DVI adapter. I'm not sure I would call that compatibility, more like a dirty hack to make old hardware work with a new interface. This is also why we have so many different adapters between various standards. More recently, I guess we can thank Apple for all the various dongles and little hubs required to make a Mac work with even the most rudimentary interfaces, due to their choice of going with only Type-C connectors on all their laptops (I don't own any Apple products).

As for USB 2.0, well, most things we use on an everyday basis don't really need a faster interface. I mean, what benefit do you get from having a mouse or a keyboard connect over USB 3.x? Also, if you look at the design of a USB 3.x host controller, the USB 2.0 part is separate from the USB 3.x part, so technically they're two separate standards rolled into one.
[Image: uPD720201 USB 3.0 host controller block diagram (upd720201.png)]


Just be glad you don't work with anything embedded or industrial; those things all still use RS-422, RS-485, various parallel busses and whatnot, as that's where compatibility really matters. But it's been pushed to the extreme in some cases, where more modern solutions are shunned just because. Some of it obviously comes down to mechanical stability as well, as I doubt some more modern interfaces would survive on a factory floor.
 
Joined
Dec 29, 2010
Messages
3,809 (0.75/day)
Processor AMD 5900x
Motherboard Asus x570 Strix-E
Cooling Hardware Labs
Memory G.Skill 4000c17 2x16gb
Video Card(s) RTX 3090
Storage Sabrent
Display(s) Samsung G9
Case Phanteks 719
Audio Device(s) Fiio K5 Pro
Power Supply EVGA 1000 P2
Mouse Logitech G600
Keyboard Corsair K95
This isn't a big deal and definitely not a reason to go one architecture vs another. No need to shed tears.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
You can't make changes without horrible consequences in this case. If you make a game and someone happens to use a terrible graphics card, that's fine; they just decrease the resolution or something like that. But if the bottleneck comes from insufficiently fast CPU-GPU-storage communication, you're done; there is nothing you can do. So developers can't change the way they write software if the hardware isn't already widespread.
So what you are saying, then, is that adoption cycles for new interfaces are long. Longer than five years. Which I agree with. But that just underscores my point: we already have plenty of fast interfaces that are yet to be utilized to even close to their full potential. Even PCIe 3.0 has lots left in the tank in most regards. And then we have 4.0, which both vendors now have, and which is almost entirely unutilized even on the hardware side. So, if adoption cycles are more than five years, and we have one established but underutilized standard and one nascent standard in the market already, what on earth is the value of pushing for another?

You brought this argument up as a way of arguing that there are potential consumer (and not just enterprise) benefits to PCIe 5.0. But the fact that PCIe 3.0 and 4.0 still aren't fully utilized entirely undermines that point. For there to be consumer value in 5.0, we would first need to be held back by current interfaces. We are not.


Also: what you're saying here isn't actually correct. Whether the bottleneck is the GPU's inability to process sufficient amounts of data or the interfaces' inability to transfer sufficient amounts of data, both can be alleviated (obviously to different degrees) by reducing the amount of data present in these operations. Reducing texture sizes or introducing GPU decompression (like DirectStorage does) reduces bandwidth requirements. This is of course dependent on a huge number of factors, but the same applies to graphics settings and whether your GPU is sufficiently powerful. It might be more difficult to scale for interface bandwidth, but on the flip side nobody is actually doing that (or programming bandwidth-aware games, at least AFAIK), which raises the question of what could be done if this was actually addressed. Just because games today lack options explicitly labeled and designed to alleviate such bottlenecks doesn't mean that such options are impossible to implement.
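Rough numbers behind that texture point (illustrative sizes only; real engines add mip chains, streaming heuristics and so on): halving resolution quarters the data moved, and block compression such as BC7 at 1 byte per texel cuts raw RGBA8 by 4x.
Code:
# How texture resolution and compression scale the data crossing the bus.
def texture_mb(width, height, bytes_per_texel):
    return width * height * bytes_per_texel / 1e6

print(f"4K RGBA8: {texture_mb(4096, 4096, 4):.0f} MB")  # ~67 MB
print(f"2K RGBA8: {texture_mb(2048, 2048, 4):.0f} MB")  # ~17 MB, a quarter of the data
print(f"4K BC7:   {texture_mb(4096, 4096, 1):.0f} MB")  # ~17 MB, same texels at 1 B/texel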
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,091 (6.63/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
So where's the leaked info from Intel?
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
That's nice, they have found the spec for Rembrandt ;-)

[Image: AMD Zen roadmap (attachment 213439)]

Source: https://www.cpu-rumors.com/amd-cpu-roadmap/
Rembrandt looks like it will trigger a wave of extremely attractive thin-and-lights. Looking forward to that for sure. Kind of surprised about the Zen3+ marking though - doesn't that mean stacked V-cache? Didn't think we'd be seeing that in mobile.

So where's the leaked info from Intel?
Lots of options:
- It's boring and they didn't post it because it's boring
- It's exciting and they're holding it for later
- They're AMD fans and want to spread AMD hype
- They're Intel fans and want to hurt AMD by spoiling its plans
- Intel paid them off to not leak it
- Intel is behind the hack

Yes, that last option is rather tongue-in-cheek :p
 
Joined
Sep 6, 2013
Messages
3,328 (0.81/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years later, got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Noctua U12S / Segotep T4 / Snowman M-T6
Memory 32GB - 16GB G.Skill RIPJAWS 3600+16GB G.Skill Aegis 3200 / 16GB JUHOR / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes/ NVMes, SATA Storage / NVMe boot(Clover), SATA storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Came up with? I guess you don't understand how NAND flash works if that's how simple you think it is.
The issue with SSDs is not the controllers, but the NAND flash. We should see some improvements as the NAND flash makers stack more layers, but even so, the technology is quite limited if you want faster random speeds, and that's one of the reasons Intel was working on 3D XPoint memory with Micron, which later became Optane, no? It might not quite have worked out, but consumer SSDs use a type of flash memory that was never really intended for what it's being used for today, yet it has scaled amazingly in the decade or so it has taken SSDs to take over from spinning rust in just about every computer you can buy.

Also, do you have any idea how long it takes to develop any kind of "chip" used in a modern computer or other device? It's not something you throw together in five minutes. In all fairness, the only reason there was a PCIe 4.0 NVMe controller so close to AMD's X570 launch was that AMD went to Phison, asked them to make something and gave them some cash to do it. It was what I'd call "cobbled together": technically a PCIe 3.0 controller with a PCIe 4.0 bus strapped onto it, hence why it performed as it did. It was also produced on a node that wasn't really meant for something handling the amount of data PCIe 4.0 can deliver, so it ran hot as anything.

How long have we used PCIe 3.0, and how long did it take until GPUs took advantage of the bus? We pretty much had to get to a GTX 1080 for it to make a difference against PCIe 1.1 at 16 lanes, based on testing by TPU. So we're obviously going to see a similarly slow transition, unless something major happens in GPU design that lets GPUs take more advantage of the bus. The generational difference is obviously going to be even smaller between 4.0 and 5.0 as long as everything else stays the same.

Did you even read the stuff that was announced these past few days? Intel will only have PCIe 5.0 for the PEG slot initially, so you won't see any PCIe 5.0 SSD support on their first consumer platform, which makes this argument completely moot. So maybe at the end of 2022, or sometime in 2023, we'll see the first PCIe 5.0 consumer SSDs.

The AQC113 was launched a couple of months ago, but why does it matter whether Intel supported PCIe 4.0 or not? We're obviously going to see a wider move towards PCIe 4.0 for many devices, one of the first being USB4 host controllers, as you can see above. I don't understand why you think that only Intel can push the ecosystem forward; they're far from the only driving force in the computer industry, and not all devices are meant for PC use. There are plenty of ARM-based server processors with PCIe 4.0, and Intel isn't even the first company with PCIe 5.0 in their processors.

I think you need to broaden your horizons a bit before making a bunch of claims about things you have limited knowledge about.
Was it so difficult? THIS is a reply. Not that other post where you even misinterpreted what I wrote, only to say nothing.
I am ignoring some of the attitude in this post, especially that last line. It's understandable. It probably also made you feel nice.
I am also not in total agreement with some parts, and other parts just say what I already wrote. But it is a very nice reply.
As for the AQC113, they probably got paid by Intel, I mean, the same case as AMD and Phison, if that story is true; or they simply decided that the user base is big enough for them with Intel in the PCIe 4.0 game.
 
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
Also: what you're saying here isn't actually correct. Whether the bottleneck is the GPU's inability to process sufficient amounts of data or the interfaces' inability to transfer sufficient amounts of data, both can be alleviated (obviously to different degrees) by reducing the amount of data present in these operations.

One is a soft limit, the other is a hard limit. How do you alleviate the performance issues of a game that are the direct result of it sending too much information back and forth between the CPU and GPU? You rewrite the game engine and all the logic? Because that's about the only thing you can do; there is no slider you can add to adjust for that. And in fact that's what developers do, so as a result games have to be written from the get-go to underutilize that interface. The same thing happened with hard drives: games were written with slow HDDs in mind for a long time, which is why SSDs made no difference whatsoever despite being orders of magnitude better in access times and raw IOPS. As a result, people assumed there was no point in having faster storage for games other than to minimize load times, but of course that wasn't true; there really are things you can't do, from a performance standpoint, with slow storage. It's the same story here.
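To put some rough numbers on why that limit is hard (a made-up 2 GB asset chunk and ballpark sequential rates; random access makes the gap even wider):
Code:
# Time to stream a mid-game asset chunk at typical sequential rates.
asset_gb = 2.0
devices_gbps = {"HDD": 0.15, "SATA SSD": 0.5, "PCIe 3.0 NVMe": 3.0, "PCIe 4.0 NVMe": 7.0}

for name, rate in devices_gbps.items():
    print(f"{name:>14}: {asset_gb / rate:5.2f} s")  # seconds per chunk
A game designed to tolerate the first number has to be architected completely differently from one that can assume the last.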

If you can't see why these things impose hard restrictions on performance, and why you need to be sure that your customers have the required hardware before you change the way you write the software, then there is nothing I can add to convince you.
 

TheLostSwede

News Editor
As for the AQC113, they probably got paid by Intel, I mean, the same case as AMD and Phison, if that story is true; or they simply decided that the user base is big enough for them with Intel in the PCIe 4.0 game.
PCI Express is a standard that anyone can license; Intel doesn't hold any specific patents to it.
Please see:
 
Joined
Sep 6, 2013
Messages
3,328 (0.81/day)
Location
Athens, Greece
PCI Express is a standard that anyone can license; Intel doesn't hold any specific patents to it.
Please see:
I am just using YOUR example with AMD and Phison. Intel could be asking companies to create more stuff to take advantage of a feature it offers in its latest platform. Of course there could be other reasons, as I mentioned before. But the fact is that we haven't seen a huge plethora of PCIe 4.0 products, and more importantly, products that really take advantage of the higher bandwidth of PCIe 4.0. Let's see if this repeats with PCIe 5.0.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Thankfully, memory is one area where AMD will maintain parity with Intel, as Socket AM5 is being designed for dual-channel DDR5.
OMG, so cool that at least in something AMD "will maintain parity" with unseen Intel greatness, I'm so excited, OMG.
 

TheLostSwede

News Editor
I am just using YOUR example with AMD and Phison. Intel could be asking companies to create more stuff to take advantage of a feature it offers in its latest platform. Of course there could be other reasons, as I mentioned before. But the fact is that we haven't seen a huge plethora of PCIe 4.0 products, and more importantly, products that really take advantage of the higher bandwidth of PCIe 4.0. Let's see if this repeats with PCIe 5.0.
My example? It wasn't an example, it's what happened.
Why would Intel pay companies to make things that would compete with Intel products? That makes no sense at all.

Again, the reasons we haven't seen a huge number of products are: 1. they take time to develop; 2. they would most likely be made on a fairly cutting-edge node and, as you surely know, there's limited fab space; and 3. a lot of things simply don't need PCIe 4.0.

There are already PCIe 5.0 SSDs in the making, but not for you or me.

Expect it to take even longer for consumer PCIe 5.0 devices to appear compared to PCIe 4.0.

Honestly though, I really don't get you; you keep going on and on about something without even trying to, or wanting to, understand how the industry works. It's really quite annoying.

Oh and you can find all certified PCIe 4.0 devices here. It looks like quite a few to me, it's just that most of them aren't for consumers.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
PCI Express is a standard that anyone can license; Intel doesn't hold any specific patents to it.
Please see:
Do you not think AMD might implement a Gen-Z/CCIX connector after PCIe 4, just because of that licence?

After all, AMD has updated PCIe before while retaining the same CPU socket; it's a good inflection point to introduce a new protocol, likely one that's PCIe-conformant and supporting in nature.
 
Joined
Sep 6, 2013
Messages
3,328 (0.81/day)
Location
Athens, Greece
My example? It wasn't an example, it's what happened.
Why would Intel pay companies to make things that would compete with Intel products? That makes no sense at all.
I guess your arguments are only valid when you use them, not when others use them.

Again, the reasons we haven't seen a huge number of products are: 1. they take time to develop; 2. they would most likely be made on a fairly cutting-edge node and, as you surely know, there's limited fab space; and 3. a lot of things simply don't need PCIe 4.0.

There are already PCIe 5.0 SSDs in the making, but not for you or me.

Expect it to take even longer for consumer PCIe 5.0 devices to appear compared to PCIe 4.0.
Let me repeat myself.
We will see if development of PCIe 5.0 products ends up slower, at the same pace, or faster compared to PCIe 4.0. I believe it will be (much) faster.

Honestly though, I really don't get you; you keep going on and on about something without even trying to, or wanting to, understand how the industry works. It's really quite annoying.
Your attitude is also annoying, but I am not complaining.

Oh and you can find all certified PCIe 4.0 devices here. It looks like quite a few to me, it's just that most of them aren't for consumers.
Oh, how nice!
 

TheLostSwede

News Editor
Let me repeat myself.
We will see if development of PCIe 5.0 products ends up slower, at the same pace, or faster compared to PCIe 4.0. I believe it will be (much) faster.
Intel had PCIe 5.0 devices in 2019, not that it matters, since there was nothing to plug into them, just like with their upcoming desktop platform.

I really doubt it'll be any faster, but you're refusing to understand what I've mentioned, so I give up. Bye bye.

Do you not think AMD might implement a GenZ ccix connector after pciex 4 just because of that License.

After all AMD have before updated pciex while retaining the same CPU socket, it's a good inflection point to introduce a new protocol likely pciex comformable and supporting in nature.

AMD is a board member of the PCI-SIG, since PCIe is what everything from a Raspberry Pi 4 CM to Annapurna's custom server chips for Amazon uses.
Unless there's an industry wide move to something else, I think we're going to keep using PCIe for now.
We're obviously going to be switching to something different at one point, but we're absolutely not at a point where PCIe is getting useless in most devices.
I'm sure we'll see very high-end server platforms switch to something else in the near future, but a regular PC doesn't have multiple CPU sockets or FPGA cards for real-time computational tasks, so the requirements for a wider bus simply aren't there yet.
CCIX is unlikely to ever end up in consumer platforms, but Gen-Z/CXL might (AMD is in both camps). I also have a feeling, as with so many past standards, that whatever becomes the de facto standard, will end up being managed by the PCI-SIG. They've taken over a lot of standards, like PCIe, M.2 etc.
 
Joined
Apr 11, 2021
Messages
214 (0.16/day)
This article implies that when AMD made the switch to PCIe 4.0, it is comparable to this situation, when that's hardly the case considering PCIe 3.0 was released in 2010 and the first PCIe 4.0 motherboards were released in 2019....that's nine years, whereas PCIe 4.0 has only been around for approximately two years and hasn't even been fully saturated yet by a GPU.
Actually, I do think it's comparable: PCIe Gen 4 was/is just as unnecessary for nearly every consumer as PCIe Gen 5 is; whether the specifications were finalised in 2011 or 2017 makes little difference. The actual difference is that Intel is offering Gen 5 only on the x16 lane connection, making its value even more questionable, unless Intel plans to release the PCIe Gen 5 equivalent of the perplexing RX 6600 XT...
 
Joined
Dec 16, 2017
Messages
2,912 (1.15/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
Actually, I do think it's comparable: PCIe Gen 4 was/is just as unnecessary for nearly every consumer as PCIe Gen 5 is; whether the specifications were finalised in 2011 or 2017 makes little difference. The actual difference is that Intel is offering Gen 5 only on the x16 lane connection, making its value even more questionable, unless Intel plans to release the PCIe Gen 5 equivalent of the perplexing RX 6600 XT...
On that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
 
Joined
Apr 11, 2021
Messages
214 (0.16/day)
On that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
I guess it could get used for an updated Hyper M.2 card? But it's anyone's guess when something like that will get released; the first, enterprise-grade, PCIe Gen 5 SSDs are planned for Q2 2022 at any rate, so it's going to be a while before we get commercially available M.2 solutions.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
On that, is it possible to use the x16 lane connection for something other than a graphics card? Because at least that way it would make a smidge of sense. Otherwise it's just completely idiotic.
Yeah, AnandTech speculated on whether it might be aimed at an x8+x4+x4 bifurcated setup with GPU + 2x NVMe. Though IMO, it's just a "first!!1!!!1!!!!1111!" spec-sheet addition to show that they're adopting new tech, which ... come on. Aren't the new cores (which look very good!) and DDR5 (which will be useful for iGPUs if nothing else) enough? I guess it's "free" in that they already have the blocks from their enterprise hardware development, but it would make more sense to make the x4 NVMe link (and maybe the chipset uplink too?) 5.0 than the PEG slot. It's a pretty weird decision.
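If the bifurcation theory is right, the math is less wasteful than it sounds (same approximate per-lane rates as earlier in the thread; a rough sketch): a Gen5 x8 link carries as much as a Gen4 x16 one, so an x8+x4+x4 split wouldn't starve the GPU.
Code:
# Why a Gen5 x8 GPU link isn't a downgrade from Gen4 x16.
GBPS_PER_LANE = {4: 1.969, 5: 3.938}  # effective GB/s per lane

print(f"Gen5 x8:  {GBPS_PER_LANE[5] * 8:.1f} GB/s")   # ~31.5 GB/s
print(f"Gen4 x16: {GBPS_PER_LANE[4] * 16:.1f} GB/s")  # ~31.5 GB/s, identical headroom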

One is a soft limit, the other is a hard limit. How do you alleviate the performance issues of a game that are the direct result of it sending too much information back and forth between the CPU and GPU? You rewrite the game engine and all the logic? Because that's about the only thing you can do; there is no slider you can add to adjust for that. And in fact that's what developers do, so as a result games have to be written from the get-go to underutilize that interface. The same thing happened with hard drives: games were written with slow HDDs in mind for a long time, which is why SSDs made no difference whatsoever despite being orders of magnitude better in access times and raw IOPS. As a result, people assumed there was no point in having faster storage for games other than to minimize load times, but of course that wasn't true; there really are things you can't do, from a performance standpoint, with slow storage. It's the same story here.

If you can't see why these things impose hard restrictions on performance, and why you need to be sure that your customers have the required hardware before you change the way you write the software, then there is nothing I can add to convince you.
It's debatable how hard this limit is - sure, there's less flexibility than there is with pure graphics quality scaling, but saying there is zero room for scaling in bandwidth requirements without making the game unplayable seems overly deterministic. The main bandwidth hog is texture data, right? So, reducing that will reduce bandwidth needs. DirectStorage will, at least at first (before the inevitable push for 128k textures now that we "have the bandwidth", I guess), reduce bandwidth needs. And so on. There's always flexibility. Just because the industry has up until now not been particularly limited in this regard (or has been bottlenecked elsewhere), and thus hasn't had any incentive to put work into implementing scaling on this particular level, doesn't mean it isn't possible.
 