
Intel's "Alder Lake" Desktop Processor supports DDR4+DDR5, (only few) PCIe Gen 5 and Dynamic Memory Clock

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,301 (7.52/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Intel will beat AMD to next-generation I/O with its 12th Generation Core "Alder Lake-S" desktop processors. The company confirmed that the processor will debut both DDR5 memory and PCI-Express Gen 5.0, which double data-rates over current-gen DDR4 and PCI-Express Gen 4, respectively. "Alder Lake-S" features a dual-channel DDR5 memory interface specced to DDR5-4800, with overclocked enthusiast-grade memory expected to exceed DDR5-7200. Besides speed, DDR5 is expected to herald a doubling in density, with 16 GB single-rank modules becoming a common density class, 32 GB single-rank possible in premium modules, and 64 GB dual-rank modules possible soon. Leading memory manufacturers have started announcing their first DDR5 products in preparation for the "Alder Lake-S" launch in Q4 2021.
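For perspective, those data-rates translate into peak bandwidth like this (a quick Python sketch using the standard JEDEC transfer rates; the helper function is our own, not anything from Intel's materials):

```python
# Peak DRAM bandwidth = transfer rate (MT/s) x bus width (bytes) x channels.
# DDR5 splits each DIMM into two 32-bit subchannels, but the total data
# width per DIMM is still 64 bits, so the same arithmetic applies.

def dram_bandwidth_gbs(mts: int, bus_bits: int = 64, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR configuration."""
    return mts * 1e6 * (bus_bits // 8) * channels / 1e9

print(f"DDR4-3200 dual channel: {dram_bandwidth_gbs(3200):.1f} GB/s")  # 51.2
print(f"DDR5-4800 dual channel: {dram_bandwidth_gbs(4800):.1f} GB/s")  # 76.8
print(f"DDR5-7200 dual channel: {dram_bandwidth_gbs(7200):.1f} GB/s")  # 115.2
```

In other words, even at the base JEDEC spec, DDR5-4800 is a 50% bandwidth jump over DDR4-3200, before any overclocking.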

The memory controller can now dynamically adjust memory frequency and voltage depending on the current workload, power budget and other inputs, a first for the PC. This could even mean automatic "Turbo" overclocking for memory. Intel also mentioned "Enhanced Overclocking Support" but didn't detail what that entails. While DDR5 is definitely the cool new kid on the block, Alder Lake's memory controller retains support for DDR4 and LPDDR4, while adding LPDDR5-5200 support (important for mobile devices). To clarify, there won't be one die supporting DDR5 and another for DDR4; all dies support all four of these memory standards. How that will work out for motherboard designs is unknown at this point.



PCI-Express Gen 5.0 is the other big I/O feature. At 32 GT/s per lane, double the transfer rate of PCIe Gen 4, PCIe Gen 5 will enable a new breed of NVMe SSDs with sequential transfer rates well above 10 GB/s. The desktop dies feature x16 PCIe Gen 5 and x4 PCIe Gen 4. The PCIe 5.0 x16 can be split into an x8 (for graphics) and two x4 links (for storage), but it's not possible to run a PCIe 5.0 x16 graphics card at the same time as a PCIe 5.0 SSD. The PCH features up to 12 downstream PCIe Gen 4 lanes and 16 PCIe Gen 3 lanes. In any case, we don't expect even the next generation of GPUs, such as RDNA3 or Ada Lovelace, to saturate PCI-Express 4.0 x16. Interestingly, Intel isn't taking advantage of PCI-Express Gen 5 to introduce a new Thunderbolt standard, with 40 Gbps Thunderbolt 4 being mentioned as one of the platform I/O standards for this processor. This could be a subtle hint that Intel is still facing trouble putting cutting-edge PCIe standards on its chipset-attached PCIe interface. It remains to be seen whether the 600-series chipset goes beyond Gen 3. There could, however, be plenty of CPU-attached PCIe Gen 5 connectivity.
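The per-lane numbers work out as follows (a rough sketch from the published PCIe transfer rates and line encodings; the function and table names are our own):

```python
# Effective PCIe bandwidth per generation. Transfer rates and encodings are
# from the published PCIe specs: 8b/10b for Gen 1/2, 128b/130b from Gen 3 on.

GENS = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10),
        3: (8.0, 128 / 130), 4: (16.0, 128 / 130), 5: (32.0, 128 / 130)}

def pcie_gbs(gen: int, lanes: int) -> float:
    """Effective one-direction bandwidth in GB/s for a PCIe link."""
    gts, eff = GENS[gen]
    return gts * eff * lanes / 8  # bits -> bytes

print(f"Gen 4 x4 : {pcie_gbs(4, 4):.2f} GB/s")   # ~7.88, today's fastest SSDs
print(f"Gen 5 x4 : {pcie_gbs(5, 4):.2f} GB/s")   # ~15.75, room for >10 GB/s drives
print(f"Gen 5 x16: {pcie_gbs(5, 16):.2f} GB/s")  # ~63.02
```

A Gen 5 x4 link's roughly 15.75 GB/s is what makes the ">10 GB/s sequential" SSD claim plausible.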

View at TechPowerUp Main Site
 
Joined
Jun 19, 2010
Messages
409 (0.08/day)
Location
Germany
Processor Ryzen 5600X
Motherboard MSI A520
Cooling Thermalright ARO-M14 orange
Memory 2x 8GB 3200
Video Card(s) RTX 3050 (ROG Strix Bios)
Storage SATA SSD
Display(s) UltraHD TV
Case Sharkoon AM5 Window red
Audio Device(s) Headset
Power Supply beQuiet 400W
Mouse Mountain Makalu 67
Keyboard MS Sidewinder X4
Software Windows, Vivaldi, Thunderbird, LibreOffice, Games, etc.
Does it use the CPU x4 Gen4 to the Chipset, or is that for NVMe?

 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Does it use the CPU x4 Gen4 to the Chipset, or is that for NVMe?

Likely NVMe; CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,776 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
Hmmm, early rumours suggested PCIe 5.0 was only going to be for the SSD connected to the CPU, but it seems like those were not true then.

Looks like Intel has finally made a platform with sufficient PCIe lanes for everything that possibly could be crammed onto a consumer motherboard.
 
Joined
Aug 6, 2020
Messages
729 (0.46/day)
Likely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.

Yeah, it's pointless PCIe revision hopping for desktop users... but it will really be welcome for server use.

I expect that, because they packed so many new techs into a single chip, they will be lucky to have review units available before Q1 next year.
 
Joined
May 8, 2018
Messages
1,571 (0.65/day)
Location
London, UK
Competition is good. I hope Intel is hard-pressed on Gen 5 PCIe; DDR5 would have come anyway.
 

TheLostSwede

News Editor
Likely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
The advantage of PCIe 4.0 over 3.0, imho, is that you can use one PCIe 4.0 lane where you'd otherwise need two PCIe 3.0 lanes. 10 Gbps Ethernet is a great example here, where you can make simpler board layouts, using the corresponding parts of course. I think we'll see a PCIe 4.0 x1 or x2 Thunderbolt chip from Intel soon as well, and of course it'll make USB4 a bit more realistic. Focusing on just storage and graphics is a little narrow-minded, as a lot of things use PCIe and there are enough things that need more than a single PCIe 3.0 lane.
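The lane math behind the 10GbE example checks out (a quick sketch; `lane_gbps` is a made-up helper, and the encoding figures are from the PCIe specs):

```python
# Effective PCIe lane bandwidth (128b/130b encoding for Gen 3/4) vs. the
# 10 Gbps line rate a 10GbE NIC has to feed.

def lane_gbps(gts: float) -> float:
    """Usable bandwidth of one PCIe lane at the given transfer rate."""
    return gts * 128 / 130  # encoding overhead

gen3 = lane_gbps(8.0)    # ~7.88 Gbps: one Gen 3 lane can't feed 10GbE
gen4 = lane_gbps(16.0)   # ~15.75 Gbps: one Gen 4 lane covers it
print(f"Gen 3 x1: {gen3:.2f} Gbps, Gen 4 x1: {gen4:.2f} Gbps vs 10GbE's 10 Gbps")
```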

However, I agree with you with regards to PCIe 5.0, since at least as far as we're aware, there's nothing in the pipeline that will make sense for consumers that can benefit from it and possibly even less so when it's only available for the GPU slot. As we're only two generations of GPUs in on PCIe 4.0, it'll most likely take another two or three before there's a move to PCIe 5.0, unless Intel is going to jump the gun...

I thought Intel didn't support bifurcation on its consumer platforms? Maybe that has changed since I last looked.
 
Joined
Nov 7, 2016
Messages
159 (0.05/day)
Processor 5950X
Motherboard Dark Hero
Cooling Custom Loop
Memory Crucial Ballistix 3600MHz CL16
Video Card(s) Gigabyte RTX 3080 Vision
Storage 980 Pro 500GB, 970 Evo Plus 500GB, Crucial MX500 2TB, Crucial MX500 2TB, Samsung 850 Evo 500GB
Display(s) Gigabyte G34WQC
Case Cooler Master C700M
Audio Device(s) Bose
Power Supply AX850
Mouse Razer DeathAdder Chroma
Keyboard MSI GK80
Software W10 Pro
Benchmark Scores CPU-Z Single-Thread: 688 Multi-Thread: 11940
Well, my next build is going to be based on Gen 3 PCI-E...
 
Joined
Dec 3, 2014
Messages
348 (0.09/day)
Location
Marabá - Pará - Brazil
System Name KarymidoN TitaN
Processor AMD Ryzen 7 5700X
Motherboard ASUS TUF X570
Cooling Custom Watercooling Loop
Memory 2x Kingston FURY RGB 16gb @ 3200mhz 18-20-20-39
Video Card(s) MSI GTX 1070 GAMING X 8GB
Storage Kingston NV2 1TB| 4TB HDD
Display(s) 4X 1080P LG Monitors
Case Aigo Darkflash DLX 4000 MESH
Power Supply Corsair TX 600
Mouse Logitech G300S
really wanna see how they're gonna cool and feed those mobos and processors, 24-phase VRM and 2x 24-pin cables?

PCI-E Gen 4 runs hot and is power hungry, at least on the AMD platform.
 

TheLostSwede

News Editor
really wanna see how thei're gonna cool and feed thoses mobos and processors, 24 Phase VRM 2x24pin cables?

PCI-E Gen4 runs hot and is power hungry, atleast on AMD Plataform.
How does an interface run hot, and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it comes to the board connectors.
That devices connected to an interface use more power and run hotter has nothing to do with the physical interface.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
The advantage of PCIe 4.0 over 3.0, imho, is that you can use one PCIe 4.0 instead of four PCIe 3.0 for a lot of things. 10Gbps Ethernet is a great example here, where you can make simpler board layouts, using the corresponding parts of course. I think we'll see a PCIe 4.0 x1 or x2 Thunderbolt chip from Intel soon as well and of course, it'll make USB 4 a bit more realistic. Focusing on just storage and graphics is a little bit narrow-minded, as a lot of things use PCIe and there are enough things that needs more than a single PCIe 3.0 lane.

However, I agree with you with regards to PCIe 5.0, since at least as far as we're aware, there's nothing in the pipeline that will make sense for consumers that can benefit from it and possibly even less so when it's only available for the GPU slot. As we're only two generations of GPUs in on PCIe 4.0, it'll most likely take another two or three before there's a move to PCIe 5.0, unless Intel is going to jump the gun...

I thought Intel didn't support bifurcation on its consumer platforms? Maybe that has changed since I last looked.
That's true, and we really need to simplify motherboard designs to keep costs down given the ballooning costs from all the high speed I/O, though beyond TB controllers (which are integrated in Intel's platforms anyhow, or is that only mobile?) and 10GbE, there aren't many relevant consumer use cases. USB controllers would obviously benefit too, though platforms these days have heaps of integrated USB as well. I would expect AMD to integrate some form of USB4 in their next-gen parts too, though I might be wrong there. Beyond that? Not much, really. Anything else is very niche, or just doesn't need the bandwidth (capture cards, sound cards, storage controllers, what have you). And as you say yourself, PCIe 4.0 would already be perfectly sufficient for this. We're nowhere near actually saturating 4.0 systems in a useful way - though IMO if Intel wanted to make a more useful difference, they'd move their chipset PCIe to 4.0 instead. I'm just hoping that this won't be another €50 motherboard price jump.

As for bifurcation, I haven't really paid that much attention to Intel's platforms in recent years, but at least back in the Z170 era there were a few OEMs that enabled bifurcation as a BIOS option (IIRC ASRock used to be "generous" in that regard). It's needed for boards with x16+x0/x8+x8 PCIe slot layouts after all, and for 2/4 drive m.2 AICs (unless they have PLX chips, which a few do), but that of course doesn't mean it's necessarily available as a user-selected option.

Yeah, it's pointless pcie revision hopping for desktop users...but these will really be welcome for server use.
Oh, yeah, servers want all the bandwidth you can throw at them. That's the main driving force for both DDR5 and PCIe 5.0 AFAIK.
 

TheLostSwede

News Editor
That's true, and we really need to simplify motherboard designs to keep costs down given the ballooning costs from all the high speed I/O, though beyond TB controllers (which are integrated in Intel's platforms anyhow, or is that only mobile?) and 10GbE, there aren't many relevant consumer use cases. USB controllers would obviously benefit too, though platforms these days have heaps of integrated USB as well. I would expect AMD to integrate some form of USB4 in their next-gen parts too, though I might be wrong there. Beyond that? Not much, really. Anything else is very niche, or just doesn't need the bandwidth (capture cards, sound cards, storage controllers, what have you). And as you say yourself, PCIe 4.0 would already be perfectly sufficient for this. We're nowhere near actually saturating 4.0 systems in a useful way - though IMO if Intel wanted to make a more useful difference, they'd move their chipset PCIe to 4.0 instead. I'm just hoping that this won't be another €50 motherboard price jump.

As for bifurcation, I haven't really paid that much attention to Intel's platforms in recent years, but at least back in the Z170 era there were a few OEMs that enabled bifurcation as a BIOS option (IIRC ASRock used to be "generous" in that regard). It's needed for boards with x16+x0/x8+x8 PCIe slot layouts after all, and for 2/4 drive m.2 AICs (unless they have PLX chips, which a few do), but that of course doesn't mean it's necessarily available as a user-selected option.
Well, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.

We barely have USB 3.2 2x2 (i.e. 20 Gbps) support, and even the boards that have it have one or two ports at most.
Again, this is something of a bandwidth issue: the current single-port controllers are never going to hit 20 Gbps, as they're limited to two PCIe 3.0 lanes, which is 16 Gbps of raw bandwidth. So even here, PCIe 4.0 would bring benefits once the host controller makers move to PCIe 4.0, which might still take a little while.
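The bandwidth mismatch described here can be sketched in a few lines (hypothetical helper names; rates from the PCIe and USB specs):

```python
# Why a host controller on PCIe 3.0 x2 can't sustain USB 3.2 Gen 2x2 (20 Gbps):
# the upstream link is slower than the downstream USB port's line rate.

def pcie3_gbps(lanes: int) -> float:
    """Effective bandwidth of a PCIe 3.0 link (8 GT/s, 128b/130b encoding)."""
    return 8.0 * (128 / 130) * lanes

USB_GEN_2X2 = 20.0  # Gbps line rate
for lanes in (2, 4):
    bw = pcie3_gbps(lanes)
    print(f"PCIe 3.0 x{lanes}: {bw:.2f} Gbps -> "
          f"{'enough' if bw >= USB_GEN_2X2 else 'bottleneck'}")

# A PCIe 4.0 x2 uplink (~31.5 Gbps) would clear 20 Gbps comfortably.
```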

The rest is mostly niche cases today, like some high-end capture cards that could benefit from PCIe 4.0, by either using fewer PCIe lanes or adding support for more channels and/or higher resolution/bandwidth. However, I don't see most consumers using something like this. In fact, most consumers don't use any of things we're discussing here.

According to the diagram above, it looks like the chipset will get 12 PCIe 4.0 lanes and 16 PCIe 3.0 lanes, unless I'm reading that entirely wrong. There have also been leaks/rumours suggesting that some 600-series chipsets will have Thunderbolt 4 integrated, and that we'll see DMI 4.0 with the possibility of up to eight lanes connecting to the CPU.

I had a look with regards to bifurcation and Intel is slightly more limited than AMD in this instance it seems, but again, unless you want to run four SSDs from a single x16 slot, it's not going to matter and if you try that on an AMD system, I'm not sure how you're going to drive the display.
 
Joined
Feb 23, 2012
Messages
40 (0.01/day)
How does an interface run hot and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it come to the board connectors.
That devices connected to an interface uses more power and runs hotter has nothing to do with the physical interface.
There's a belief that X570 runs hot because of PCIe 4.0, despite der8auer showing basically no difference between PCIe gen speeds, or even whether there's an NVMe SSD present or not.
The reason would rather be that it's using a repurposed die with a bunch of useless mm² that sucks power.
 

TheLostSwede

News Editor
There's a belief that X570 runs hot because of pcie 4.0, despite derb8aur showing basicaly no difference between pcie gen speed or even if there's a nvme ssd or not
The reason would rather be that it's using a repurposed die with a bunch of useless mm² that sucks power.
Right, but again, that's a chipset issue, not a protocol or interface issue.
And the X570 chipset can apparently run hot, if you're putting heavy load on a pair of PCIe 4.0 NVMe SSDs in RAID.
It's not directly related to PCIe 4.0 as you mention.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
Well, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.

We barely have USB 3.2 2x2 (i.e. 20Gbps) support and even the boards that have it, has one or two ports at most.
Again, this is something of a bandwidth issue and the current single port controllers are never going to hit 20Gbps, as they're limited to two PCIe 3.0 lanes, which is 16Gbps of bandwidth. So even here, PCIe 4.0 would bring benefits once the host controller makers move to PCIe 4.0, which might still take a little while.

The rest is mostly niche cases today, like some high-end capture cards that could benefit from PCIe 4.0, by either using fewer PCIe lanes or adding support for more channels and/or higher resolution/bandwidth. However, I don't see most consumers using something like this. In fact, most consumers don't use any of things we're discussing here.

According to the diagram above, it looks like the chipset will get 12 PCIe 4.0 lanes and 16 PCIe 3.0, unless I'm reading that entirely wrong. There has also been leaks/rumours suggesting that some 600-series chipset will have Thunderbolt 4 integrated. and that we'll see DMI 4.0 with the possibility of up to eight lanes connecting with the CPU.

I had a look with regards to bifurcation and Intel is slightly more limited than AMD in this instance it seems, but again, unless you want to run four SSDs from a single x16 slot, it's not going to matter and if you try that on an AMD system, I'm not sure how you're going to drive the display.
Yeah, I just read AT's coverage on this, and you're right about the chipset lanes, 16 3.0 + 12 4.0. They also speculate whether the 5.0 lanes might be bifurcatable (ugh, is that a word?) into x8+x4+x4 for 5.0 NVMe storage. I agree that it's a weird configuration otherwise, as 5.0 GPUs likely won't be a thing (or a thing that matters whatsoever) for years. Guess we'll see.

You're right about fast USB ports needing the bandwidth, but (and this might be a controversial opinion): I don't see a reason to add more high speed ports. New standards, like 4.0? Sure, yes, move 3.2g2 ports to 4.0, or add a couple more at the very most. But more than 4 ports in that speed class is just wasteful. Heck, people barely utilize USB 3.0 speeds most of the time, and the number of external 3.2G2x2 devices out there capable of actually utilizing 20Gbps can likely be counted on two hands. Having access to fast I/O is valuable, having tons of it is useless spec fluffing. And it drives up board costs. Heck, with 10-12 USB ports on a board I wouldn't even mind 4 of them being 2.0 - that'd still leave far more fast I/O than 99.99999% of users will ever utilize. 2 fast ports, 4-6 5Gbps ports and a few 2.0 ports is enough for pretty much anyone (front I/O of course adds to this as well).

Of course, Intel apparently still isn't integrating TB into their desktop CPUs or chipsets, so that will still be an optional add-on requiring lanes and complicating board designs (though at least now chipsets have plenty of PCIe to handle that). I was kind of expecting them to add a couple of USB4/TB4 ports given their push for this with TGL, but I guess that's mobile-only.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
Likely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
DirectStorage is probably the most prominent thing PCIe 5.0 is useful for that isn't predominantly just a marketing asterisk.

Well, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.

This is another good example that's basically CAT8 for USB4.
 
Joined
Dec 3, 2014
Messages
348 (0.09/day)
Location
Marabá - Pará - Brazil
How does an interface run hot and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it come to the board connectors.
That devices connected to an interface uses more power and runs hotter has nothing to do with the physical interface.

The whole reason AMD had to put a flippin' fan in their X570 chipset heatsinks was PCI-E Gen 4... it ran hotter and drew more power because of PCIe Gen 4.
 

TheLostSwede

News Editor
The whole reason AMD had to put a flippin' fan in their chipset heatsinks for the X570 chipset was PCIe Gen 4... it ran hotter and drew more power because of PCIe Gen 4.
No, it was not. It's only there for the niche case of running two PCIe 4.0 NVMe drives in RAID.
This still had nothing to do with PCIe 4.0, but you're free to choose not to understand that part.
You need to learn to differentiate between things.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
DirectStorage is probably the most prominent use case for PCIe 5.0 that isn't predominantly just a marketing asterisk.
That assertion has several unsubstantiated assumptions behind it:
- that consumer M.2 PCIe 5.0 SSDs will actually meaningfully outperform 3.0 and 4.0 drives in real-world workloads within the lifetime of this platform
- that DirectStorage, which is designed around the ~2 GB/s peak drives in the Xboxes, will be able to benefit from significantly higher speeds
- that games will be able to make use of this additional bandwidth

So unless all of these come true, this will be one of those classic highly marketed features that come with zero tangible benefits.
 

TheLostSwede

News Editor
That assertion has several unsubstantiated assumptions behind it:
- that consumer M.2 PCIe 5.0 SSDs will actually meaningfully outperform 3.0 and 4.0 drives in real-world workloads within the lifetime of this platform
- that DirectStorage, which is designed around the ~2 GB/s peak drives in the Xboxes, will be able to benefit from significantly higher speeds
- that games will be able to make use of this additional bandwidth

So unless all of these come true, this will be one of those classic highly marketed features that come with zero tangible benefits.
I believe DirectStorage will bring tangible benefits when implemented properly, but I think you're right that PCIe 5.0 will make no difference whatsoever to DirectStorage.
 
Joined
Feb 24, 2009
Messages
3,516 (0.61/day)
System Name Money Hole
Processor Core i7 970
Motherboard Asus P6T6 WS Revolution
Cooling Noctua UH-D14
Memory 2133Mhz 12GB (3x4GB) Mushkin 998991
Video Card(s) Sapphire Tri-X OC R9 290X
Storage Samsung 1TB 850 Evo
Display(s) 3x Acer KG240A 144hz
Case CM HAF 932
Audio Device(s) ADI (onboard)
Power Supply Enermax Revolution 85+ 1050w
Mouse Logitech G602
Keyboard Logitech G710+
Software Windows 10 Professional x64
The memory controller is now able to dynamically adjust memory frequency and voltage, depending on current workload, power budget and other inputs—a first for the PC!

This is not a first. AMD did this with their mobile Athlon CPUs. It was problematic because, even though the DDR clock changed, the timings did not, so you would get power savings (mainly because there was only one power plane) but with an increase in latency.

Unless Intel has some miracle way of changing timings on the fly, this is a terrible idea, because it will only increase latencies.
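The latency penalty being described falls straight out of the arithmetic: CAS latency is programmed in clock cycles, so if the clock drops and the cycle count doesn't, the absolute delay grows. A rough sketch with illustrative numbers (the DDR5-4800/CL40 figures are just an example, not Intel's actual behaviour):

```python
# If the controller halves the DRAM data rate but the programmed timings
# (in clock cycles) stay fixed, the absolute latency in ns doubles.
# Real retraining would adjust the cycle counts per clock as well.
def cas_latency_ns(data_rate_mt: float, cas_cycles: int) -> float:
    """CAS latency in ns: cycles / I/O clock (MHz); DDR clock = data rate / 2."""
    clock_mhz = data_rate_mt / 2
    return cas_cycles / clock_mhz * 1000

# DDR5-4800 CL40 at full speed:
print(f"{cas_latency_ns(4800, 40):.1f} ns")   # ~16.7 ns
# Downclocked to 2400 MT/s with CL still programmed at 40:
print(f"{cas_latency_ns(2400, 40):.1f} ns")   # ~33.3 ns, double the latency
```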
 
I believe DirectStorage will bring tangible benefits when implemented properly, but I think you're right that PCIe 5.0 will make no difference whatsoever to DirectStorage.
Oh, absolutely, I think DS has the potential to be a very major innovation, and I'm really looking forward to seeing it implemented.
 
Joined
Feb 21, 2006
Messages
2,240 (0.33/day)
Location
Toronto, Ontario
System Name The Expanse
Processor AMD Ryzen 7 5800X3D
Motherboard Asus Prime X570-Pro BIOS 5013 AM4 AGESA V2 PI 1.2.0.Cc.
Cooling Corsair H150i Pro
Memory 32GB GSkill Trident RGB DDR4-3200 14-14-14-34-1T (B-Die)
Video Card(s) XFX Radeon RX 7900 XTX Magnetic Air (24.12.1)
Storage WD SN850X 2TB / Corsair MP600 1TB / Samsung 860Evo 1TB x2 Raid 0 / Asus NAS AS1004T V2 20TB
Display(s) LG 34GP83A-B 34 Inch 21: 9 UltraGear Curved QHD (3440 x 1440) 1ms Nano IPS 160Hz
Case Fractal Design Meshify S2
Audio Device(s) Creative X-Fi + Logitech Z-5500 + HS80 Wireless
Power Supply Corsair AX850 Titanium
Mouse Corsair Dark Core RGB SE
Keyboard Corsair K100
Software Windows 10 Pro x64 22H2
Benchmark Scores 3800X https://valid.x86.fr/1zr4a5 5800X https://valid.x86.fr/2dey9c 5800X3D https://valid.x86.fr/b7d
DirectStorage is probably the most prominent use case for PCIe 5.0 that isn't predominantly just a marketing asterisk.



This is another good example that's basically CAT8 for USB4.
DirectStorage works on the new Xbox, which isn't even running at PCIe 4.0 speeds; it's closer to PCIe 3.0 speed.

I don't see PCIe 5.0 making a difference there, at least at the start.
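To put that in perspective, here's roughly where an Xbox Series X-class drive (~2.4 GB/s raw, going by Microsoft's published figure) lands against x4 NVMe link ceilings; the per-lane figures are approximate effective rates after 128b/130b encoding:

```python
# How much of an x4 PCIe link an Xbox Series X-class drive (~2.4 GB/s
# raw) would actually use, per generation. Per-lane figures are
# approximate effective GB/s after 128b/130b encoding.
LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

XBOX_RAW = 2.4  # GB/s, raw (uncompressed) throughput

for gen, lane in LANE_GBPS.items():
    link = lane * 4
    print(f"PCIe {gen} x4: {link:.1f} GB/s link, "
          f"Xbox-class drive fills {XBOX_RAW / link:.0%} of it")
```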
 
:confused::confused::confused:
It needs a fan because it can consume more power than can be dissipated passively (without a large heatsink). That it can doesn't mean that it will in the vast majority of cases, but you can't gamble on that not happening with your specific design; that would lead to issues very quickly, and a motherboard maker can't control how people use their products. Thus they include fans even if they likely aren't needed in the vast majority of cases. You design for the worst-case scenario, or at least the worst reasonable one.

The only time I've heard of an X570 board overheating was a crazy dense SFF build where someone crammed a 5950X and RTX 2080Ti into an NFC S4M and cooled them with a single 140mm radiator (yes, apparently that is possible, though it requires a lot of custom fabrication, including a modified server PSU). They had removed the stock chipset heatsink for space savings, replaced it with a small standard chipset heatsink and a slim 40mm fan, but kept having throttling issues due to 100-110°C chipset temps. But that, needless to say, is a rather unusual scenario.
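The worst-case reasoning above can be put into rough numbers. This is a back-of-the-envelope sketch; the ~11 W chipset power figure and both thermal resistance values are illustrative assumptions, not measured specs:

```python
# Back-of-the-envelope chipset die temperature, passive vs. fan-assisted.
# Assumed numbers: ~11 W worst-case chipset power, small heatsink at
# ~8 degC/W with no airflow vs ~3 degC/W with a fan, 45 degC case ambient.
def die_temp_c(power_w: float, theta_cw: float, ambient_c: float = 45.0) -> float:
    """Steady-state temperature: ambient + power * sink-to-ambient resistance."""
    return ambient_c + power_w * theta_cw

# Small passive heatsink, no airflow:
print(f"{die_temp_c(11, 8.0):.0f} degC")  # 133 degC, throttling territory
# Same heatsink with a fan:
print(f"{die_temp_c(11, 3.0):.0f} degC")  # 78 degC, fine
```

The passive case only clears thermal limits if the real-world power stays well below the worst case, which is exactly the gamble a board maker can't take.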
 