Thursday, August 19th 2021

Intel's "Alder Lake" Desktop Processor Supports DDR4+DDR5, (Limited) PCIe Gen 5 and Dynamic Memory Clock

Intel will beat AMD to next-generation I/O with its 12th Generation Core "Alder Lake-S" desktop processors. The company confirmed that the processor will debut both DDR5 memory and PCI-Express Gen 5.0, which double data-rates over current-generation DDR4 and PCI-Express Gen 4, respectively. "Alder Lake-S" features a dual-channel DDR5 memory interface specced to DDR5-4800, with overclocking expected to push enthusiast-grade memory to speeds in excess of DDR5-7200. Besides speed, DDR5 is expected to herald a doubling in density: 16 GB single-rank modules become a common density class, 32 GB single-rank modules are possible at the premium end, and 64 GB dual-rank modules should follow soon. Leading memory manufacturers have started announcing their first DDR5 products in preparation for the "Alder Lake-S" launch in Q4 2021.
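For a sense of scale, the peak-bandwidth arithmetic behind those data-rates can be sketched in a few lines of Python (the helper name is ours; it assumes the usual 64-bit bus per DIMM and ignores real-world efficiency):

```python
# Rough peak-bandwidth math for the memory speeds mentioned above.
# Assumes a 64-bit (8-byte) bus per DIMM; real-world throughput is lower.
def ddr_peak_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a dual-channel DDR interface."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

print(ddr_peak_gbs(3200))  # DDR4-3200, dual channel: 51.2 GB/s
print(ddr_peak_gbs(4800))  # DDR5-4800, dual channel: 76.8 GB/s
print(ddr_peak_gbs(7200))  # overclocked DDR5-7200: 115.2 GB/s
```

Even at the stock DDR5-4800 spec, peak bandwidth is up 50% over DDR4-3200 before overclocking enters the picture.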

The memory controller is now able to dynamically adjust memory frequency and voltage depending on the current workload, power budget and other inputs, a first for the PC! This could even mean automatic "Turbo" overclocking for memory. Intel also mentioned "Enhanced Overclocking Support," but didn't go into further detail about what that entails. While DDR5 is definitely the cool new kid on the block, Intel's Alder Lake memory controller keeps support for DDR4 and LPDDR4, while adding LPDDR5-5200 support (important for mobile devices). To clarify, there won't be one die supporting DDR5 and another for DDR4; all dies will support all four of these memory standards. How that will work out for motherboard designs is unknown at this point.
PCI-Express Gen 5.0 is the other big I/O feature. With a signaling rate of 32 GT/s per lane, double that of PCIe Gen 4, PCIe Gen 5 will enable a new breed of NVMe SSDs with sequential transfer rates well above 10 GB/s. The desktop dies feature sixteen PCIe Gen 5 lanes and four PCIe Gen 4 lanes. The PCIe 5.0 x16 link can be split into one x8 (for graphics) and two x4 (for storage), but it's not possible to run a PCIe 5.0 x16 graphics card at the same time as a PCIe 5.0 SSD. The PCH features up to 12 downstream PCIe Gen 4 lanes and 16 PCIe Gen 3 lanes. In any case, we don't expect even the next generation of GPUs, such as RDNA3 or Ada Lovelace, to saturate PCI-Express 4.0 x16. Interestingly, Intel isn't taking advantage of PCI-Express Gen 5 to introduce a new Thunderbolt standard, with 40 Gbps Thunderbolt 4 mentioned as one of the platform I/O standards for this processor. This could be a subtle hint that Intel is still facing trouble putting cutting-edge PCIe standards on its chipset-attached PCIe interface. It remains to be seen whether the 600-series chipset goes beyond Gen 3. There could, however, be plenty of CPU-attached PCIe Gen 5 connectivity.
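The per-lane doubling works out as follows; a Gen 5 x4 SSD link, for instance, comes to roughly 15.8 GB/s of usable bandwidth, which is where the "well above 10 GB/s" figure comes from (an illustrative sketch, not from Intel's materials; only line-code overhead is accounted for):

```python
# Usable bandwidth per PCIe generation, assuming the 128b/130b encoding
# used from Gen 3 onward. GT/s values are the spec signaling rates.
def pcie_gbs(gt_per_s: float, lanes: int = 1) -> float:
    """Approximate usable bandwidth in GB/s after 128b/130b overhead."""
    return gt_per_s * (128 / 130) / 8 * lanes

for gen, gt in [("3.0", 8.0), ("4.0", 16.0), ("5.0", 32.0)]:
    print(f"Gen {gen}: {pcie_gbs(gt):.2f} GB/s per lane, x16 = {pcie_gbs(gt, 16):.1f} GB/s")
```

Each generation exactly doubles the one before it; a Gen 5 x8 link carries as much as a Gen 4 x16 link, which is what makes the x8 + x4 + x4 split plausible.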

24 Comments on Intel's "Alder Lake" Desktop Processor Supports DDR4+DDR5, (Limited) PCIe Gen 5 and Dynamic Memory Clock

#1
_Flare
Does it use the CPU x4 Gen4 to the Chipset, or is that for NVMe?

Posted on Reply
#2
Valantar
_FlareDoes it use the CPU x4 Gen4 to the Chipset, or is that for NVMe?

Likely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
Posted on Reply
#3
TheLostSwede
News Editor
Hmmm, early rumours suggested PCIe 5.0 was only going to be for the SSD connected to the CPU, but it seems like those were not true then.

Looks like Intel has finally made a platform with sufficient PCIe lanes for everything that possibly could be crammed onto a consumer motherboard.
Posted on Reply
#4
defaultluser
ValantarLikely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
Yeah, it's pointless PCIe revision hopping for desktop users... but these will really be welcome for server use.

I expect that, because they packed so many new technologies into a single chip, they will be lucky to have review units available before Q1 next year.
Posted on Reply
#5
Metroid
Competition is good. I hope Intel is hard pressed on Gen 5 PCIe; DDR5 would come anyway.
Posted on Reply
#6
TheLostSwede
News Editor
ValantarLikely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
The advantage of PCIe 4.0 over 3.0, imho, is that you can use one PCIe 4.0 lane instead of multiple PCIe 3.0 lanes for a lot of things. 10 Gbps Ethernet is a great example here, where you can make simpler board layouts, using the corresponding parts of course. I think we'll see a PCIe 4.0 x1 or x2 Thunderbolt chip from Intel soon as well, and of course it'll make USB4 a bit more realistic. Focusing on just storage and graphics is a little bit narrow-minded, as a lot of things use PCIe and there are enough things that need more than a single PCIe 3.0 lane.
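As a back-of-envelope sketch of that 10GbE point (illustrative Python; the function name is made up, and 128b/130b encoding from PCIe 3.0 onward is assumed):

```python
import math

# How many lanes a device of a given bandwidth needs at each PCIe
# generation, counting only 128b/130b line-code overhead.
def lanes_needed(device_gbps: float, gen_gt_per_s: float) -> int:
    usable_per_lane = gen_gt_per_s * 128 / 130  # usable Gbps per lane
    return math.ceil(device_gbps / usable_per_lane)

print(lanes_needed(10, 8))   # 10GbE on PCIe 3.0: 2 lanes
print(lanes_needed(10, 16))  # 10GbE on PCIe 4.0: 1 lane
```

Halving the lane count per controller is exactly the routing and board-cost saving being described.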

However, I agree with you with regards to PCIe 5.0, since at least as far as we're aware, there's nothing in the pipeline that will make sense for consumers that can benefit from it and possibly even less so when it's only available for the GPU slot. As we're only two generations of GPUs in on PCIe 4.0, it'll most likely take another two or three before there's a move to PCIe 5.0, unless Intel is going to jump the gun...

I thought Intel didn't support bifurcation on its consumer platforms? Maybe that has changed since I last looked.
Posted on Reply
#7
Hossein Almet
Well, my next build is going to be based on Gen 3 PCI-E...
Posted on Reply
#8
KarymidoN
Really wanna see how they're gonna cool and feed those mobos and processors. 24-phase VRMs, 2x 24-pin cables?

PCI-E Gen 4 runs hot and is power hungry, at least on the AMD platform.
Posted on Reply
#9
TheLostSwede
News Editor
KarymidoNreally wanna see how thei're gonna cool and feed thoses mobos and processors, 24 Phase VRM 2x24pin cables?

PCI-E Gen4 runs hot and is power hungry, atleast on AMD Plataform.
How does an interface run hot, and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it comes to the board connectors.
That devices connected to an interface use more power and run hotter has nothing to do with the physical interface.
Posted on Reply
#10
Valantar
TheLostSwedeThe advantage of PCIe 4.0 over 3.0, imho, is that you can use one PCIe 4.0 instead of four PCIe 3.0 for a lot of things. 10Gbps Ethernet is a great example here, where you can make simpler board layouts, using the corresponding parts of course. I think we'll see a PCIe 4.0 x1 or x2 Thunderbolt chip from Intel soon as well and of course, it'll make USB 4 a bit more realistic. Focusing on just storage and graphics is a little bit narrow-minded, as a lot of things use PCIe and there are enough things that needs more than a single PCIe 3.0 lane.

However, I agree with you with regards to PCIe 5.0, since at least as far as we're aware, there's nothing in the pipeline that will make sense for consumers that can benefit from it and possibly even less so when it's only available for the GPU slot. As we're only two generations of GPUs in on PCIe 4.0, it'll most likely take another two or three before there's a move to PCIe 5.0, unless Intel is going to jump the gun...

I thought Intel didn't support bifurcation on its consumer platforms? Maybe that has changed since I last looked.
That's true, and we really need to simplify motherboard designs to keep costs down given the ballooning costs from all the high speed I/O, though beyond TB controllers (which are integrated in Intel's platforms anyhow, or is that only mobile?) and 10GbE, there aren't many relevant consumer use cases. USB controllers would obviously benefit too, though platforms these days have heaps of integrated USB as well. I would expect AMD to integrate some form of USB4 in their next-gen parts too, though I might be wrong there. Beyond that? Not much, really. Anything else is very niche, or just doesn't need the bandwidth (capture cards, sound cards, storage controllers, what have you). And as you say yourself, PCIe 4.0 would already be perfectly sufficient for this. We're nowhere near actually saturating 4.0 systems in a useful way - though IMO if Intel wanted to make a more useful difference, they'd move their chipset PCIe to 4.0 instead. I'm just hoping that this won't be another €50 motherboard price jump.

As for bifurcation, I haven't really paid that much attention to Intel's platforms in recent years, but at least back in the Z170 era there were a few OEMs that enabled bifurcation as a BIOS option (IIRC ASRock used to be "generous" in that regard). It's needed for boards with x16+x0/x8+x8 PCIe slot layouts after all, and for 2/4 drive m.2 AICs (unless they have PLX chips, which a few do), but that of course doesn't mean it's necessarily available as a user-selected option.
defaultluserYeah, it's pointless pcie revision hopping for desktop users...but these will really be welcome for server use.
Oh, yeah, servers want all the bandwidth you can throw at them. That's the main driving force for both DDR5 and PCIe 5.0 AFAIK.
Posted on Reply
#11
TheLostSwede
News Editor
ValantarThat's true, and we really need to simplify motherboard designs to keep costs down given the ballooning costs from all the high speed I/O, though beyond TB controllers (which are integrated in Intel's platforms anyhow, or is that only mobile?) and 10GbE, there aren't many relevant consumer use cases. USB controllers would obviously benefit too, though platforms these days have heaps of integrated USB as well. I would expect AMD to integrate some form of USB4 in their next-gen parts too, though I might be wrong there. Beyond that? Not much, really. Anything else is very niche, or just doesn't need the bandwidth (capture cards, sound cards, storage controllers, what have you). And as you say yourself, PCIe 4.0 would already be perfectly sufficient for this. We're nowhere near actually saturating 4.0 systems in a useful way - though IMO if Intel wanted to make a more useful difference, they'd move their chipset PCIe to 4.0 instead. I'm just hoping that this won't be another €50 motherboard price jump.

As for bifurcation, I haven't really paid that much attention to Intel's platforms in recent years, but at least back in the Z170 era there were a few OEMs that enabled bifurcation as a BIOS option (IIRC ASRock used to be "generous" in that regard). It's needed for boards with x16+x0/x8+x8 PCIe slot layouts after all, and for 2/4 drive m.2 AICs (unless they have PLX chips, which a few do), but that of course doesn't mean it's necessarily available as a user-selected option.
Well, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.

We barely have USB 3.2 2x2 (i.e. 20 Gbps) support, and even the boards that have it have one or two ports at most.
Again, this is something of a bandwidth issue: the current single-port controllers are never going to hit 20 Gbps, as they're limited to two PCIe 3.0 lanes, which is 16 Gbps of bandwidth. So even here, PCIe 4.0 would bring benefits once the host controller makers move to PCIe 4.0, which might still take a little while.
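A quick check of that uplink bottleneck in Python (variable names are illustrative; only 128b/130b line-code overhead is counted):

```python
# A 20 Gbps USB 3.2 Gen 2x2 controller behind a PCIe 3.0 x2 host link.
PCIE3_GT_PER_LANE = 8        # GT/s per PCIe 3.0 lane
ENCODING = 128 / 130         # 128b/130b line-code efficiency

uplink_gbps = 2 * PCIE3_GT_PER_LANE * ENCODING  # usable Gbps upstream
usb_port_gbps = 20                              # USB 3.2 Gen 2x2 rate

print(f"PCIe 3.0 x2 uplink: {uplink_gbps:.2f} Gbps")
print(f"USB 3.2 2x2 port:   {usb_port_gbps} Gbps")
print("Port exceeds uplink:", usb_port_gbps > uplink_gbps)
```

So even before protocol overheads, the host link tops out around 15.75 Gbps; a single PCIe 4.0 lane would already come closer than two 3.0 lanes do.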

The rest is mostly niche cases today, like some high-end capture cards that could benefit from PCIe 4.0, by either using fewer PCIe lanes or adding support for more channels and/or higher resolution/bandwidth. However, I don't see most consumers using something like this. In fact, most consumers don't use any of things we're discussing here.

According to the diagram above, it looks like the chipset will get 12 PCIe 4.0 lanes and 16 PCIe 3.0 lanes, unless I'm reading that entirely wrong. There have also been leaks/rumours suggesting that some 600-series chipsets will have Thunderbolt 4 integrated, and that we'll see DMI 4.0 with the possibility of up to eight lanes connecting to the CPU.
videocardz.com/newz/exclusive-intel-12th-gen-core-alder-lake-s-platform-detailed

I had a look with regards to bifurcation and Intel is slightly more limited than AMD in this instance it seems, but again, unless you want to run four SSDs from a single x16 slot, it's not going to matter and if you try that on an AMD system, I'm not sure how you're going to drive the display.
Posted on Reply
#12
Nioktefe
TheLostSwedeHow does an interface run hot and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it come to the board connectors.
That devices connected to an interface uses more power and runs hotter has nothing to do with the physical interface.
There's a belief that X570 runs hot because of PCIe 4.0, despite der8auer showing basically no difference between PCIe gen speeds, or even whether there's an NVMe SSD present or not.
The reason would rather be that it's using a repurposed die with a bunch of useless mm² that sucks power.
Posted on Reply
#13
TheLostSwede
News Editor
NioktefeThere's a belief that X570 runs hot because of pcie 4.0, despite derb8aur showing basicaly no difference between pcie gen speed or even if there's a nvme ssd or not
The reason would rather be that it's using a repurposed die with a bunch of useless mm² that sucks power.
Right, but again, that's a chipset issue, not a protocol or interface issue.
And the X570 chipset can apparently run hot if you're putting a heavy load on a pair of PCIe 4.0 NVMe SSDs in RAID.
It's not directly related to PCIe 4.0, as you mentioned.
Posted on Reply
#14
Valantar
TheLostSwedeWell, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.

We barely have USB 3.2 2x2 (i.e. 20Gbps) support and even the boards that have it, has one or two ports at most.
Again, this is something of a bandwidth issue and the current single port controllers are never going to hit 20Gbps, as they're limited to two PCIe 3.0 lanes, which is 16Gbps of bandwidth. So even here, PCIe 4.0 would bring benefits once the host controller makers move to PCIe 4.0, which might still take a little while.

The rest is mostly niche cases today, like some high-end capture cards that could benefit from PCIe 4.0, by either using fewer PCIe lanes or adding support for more channels and/or higher resolution/bandwidth. However, I don't see most consumers using something like this. In fact, most consumers don't use any of things we're discussing here.

According to the diagram above, it looks like the chipset will get 12 PCIe 4.0 lanes and 16 PCIe 3.0, unless I'm reading that entirely wrong. There has also been leaks/rumours suggesting that some 600-series chipset will have Thunderbolt 4 integrated. and that we'll see DMI 4.0 with the possibility of up to eight lanes connecting with the CPU.
videocardz.com/newz/exclusive-intel-12th-gen-core-alder-lake-s-platform-detailed

I had a look with regards to bifurcation and Intel is slightly more limited than AMD in this instance it seems, but again, unless you want to run four SSDs from a single x16 slot, it's not going to matter and if you try that on an AMD system, I'm not sure how you're going to drive the display.
Yeah, I just read AT's coverage on this, and you're right about the chipset lanes, 16 3.0 + 12 4.0. They also speculate whether the 5.0 lanes might be bifurcatable (ugh, is that a word?) into x8+x4+x4 for 5.0 NVMe storage. I agree that it's a weird configuration otherwise, as 5.0 GPUs likely won't be a thing (or a thing that matters whatsoever) for years. Guess we'll see.

You're right about fast USB ports needing the bandwidth, but (and this might be a controversial opinion): I don't see a reason to add more high speed ports. New standards, like 4.0? Sure, yes, move 3.2g2 ports to 4.0, or add a couple more at the very most. But more than 4 ports in that speed class is just wasteful. Heck, people barely utilize USB 3.0 speeds most of the time, and the number of external 3.2G2x2 devices out there capable of actually utilizing 20Gbps can likely be counted on two hands. Having access to fast I/O is valuable, having tons of it is useless spec fluffing. And it drives up board costs. Heck, with 10-12 USB ports on a board I wouldn't even mind 4 of them being 2.0 - that'd still leave far more fast I/O than 99.99999% of users will ever utilize. 2 fast ports, 4-6 5Gbps ports and a few 2.0 ports is enough for pretty much anyone (front I/O of course adds to this as well).

Of course, Intel apparently still isn't integrating TB into their desktop CPUs or chipsets, so that will still be an optional add-on requiring lanes and complicating board designs (though at least now chipsets have plenty of PCIe to handle that). I was kind of expecting them to add a couple of USB4/TB4 ports given their push for this with TGL, but I guess that's mobile-only.
Posted on Reply
#15
InVasMani
ValantarLikely NVMe, CPU-to-chipset links are typically called DMI links, despite being essentially just PCIe.


As for this announcement ... I might be getting tired of the never-ending specs race, but is this actually meaningful in any way? Now, I don't generally benchmark beyond testing out new parts and a tad of occasional tuning. So, the experience is what matters. And given that we know that the difference between NVMe and SATA SSDs is near imperceptible, and 3.0 and 4.0 is entirely imperceptible (outside of very niche use cases), what is the actual real-world benefit from PCIe 5.0? DDR5 I can see, at least for iGPU performance and memory bound HPC/datacenter workloads, but PCIe 5.0 for consumers? Nope. Not beyond making motherboards more expensive, likely increasing power consumption, and allowing for the sale of ever more expensive "premium" parts with zero noticeable performance differences compared to previous PCIe generations. GPUs too ... is there a push for moving future mid-range and low-end GPUs to x4 links for space/money/PCB area savings? 'Cause if not, then PCIe 3.0 x16 or 4.0 x8 is sufficient for every GPU out there, and for the people insisting that the >1% performance increase to 4.0 x16 is noticeable, well, that's ubiquitous. PCIe 5.0 won't affect GPU performance for years and years and years.

I guess this would be good for bifurcated SFF builds seeing how you can get more out of a single x16 slot, but that is assuming we get low lane count parts (5.0 x1 SSDs would be interesting). But again that's such a niche use case it's hardly worth mentioning.
Direct Storage is probably the most prominent thing PCIe 5.0 would be useful for that isn't predominantly just a marketing asterisk.
TheLostSwedeWell, USB4 is based on Thunderbolt and is meant to be hitting 40Gbps, so it'll need quite a bit of bandwidth and it's obviously not integrated into any known, upcoming chipset. I guess it'll be something that will use PCIe 4.0 from the get go, but I could be wrong. I don't see it being integrated straight away, much in the same way that none of the new USB standards have to date.
This is another good example that's basically CAT8 for USB4.
Posted on Reply
#16
KarymidoN
TheLostSwedeHow does an interface run hot and how is an interface power hungry?
The PCIe power spec hasn't changed since version 2.1 when it come to the board connectors.
That devices connected to an interface uses more power and runs hotter has nothing to do with the physical interface.
The whole reason AMD had to put a flippin' fan in their chipset heatsinks for the X570 chipset was PCI-E Gen 4... it runs hotter and draws more power because of PCIe Gen 4.
Posted on Reply
#17
TheLostSwede
News Editor
KarymidoNthe whole reason amd had to put a flippin cooler in their chipset heatsinks for the X570 Chipset was because of PCI-E gen 4... It run Hotter and drew more power because of PCIE gen4
No, it was not. It's only there for that niche case of running two PCIe 4.0 NVMe drives in RAID.
This still has nothing to do with PCIe 4.0, but you're free to choose not to understand that part.
You need to learn to differentiate between things.
Posted on Reply
#18
Valantar
InVasManiDirect Storage is probably the most prominent thing where PCIe 5.0 is useful for that isn't dominantly just a marketing asterisk.
That assertion has several unsubstantiated assumptions backing it:
- that consumer M.2 PCIe 5.0 SSDs will actually meaningfully outperform 3.0 and 4.0 drives in real-world workloads within the lifetime of this platform
- that DS, which is designed around the ~2 GB/s peak drives in the Xboxes, will be able to benefit from significantly higher speeds
- that games will be able to make use of this additional bandwidth

So unless all of these come true, this will be one of those classic highly marketed features that come with zero tangible benefits.
Posted on Reply
#19
TheLostSwede
News Editor
ValantarThat assertion has several unsubstantiated assumptions backing it:
- that consumer m.2 PCIe 5.0 SSDs will actually meaningfully outperform 3.0 and 4.0 drives in real world workloads within the lifetime of this platform
- that DS, which is designed around the ~2GBps peak drives in the Xboxes will be able to benefit from significantly higher speeds
- that games will be able to make use of this additional bandwidth

So unless all of these come true, this will be one lf those classic highly marketed features that come with zero tangible benefits.
I believe Direct Storage will bring tangible benefits, when implemented properly, but I think you're right with regards to PCIe 5.0 making no difference whatsoever to Direct Storage.
Posted on Reply
#20
mastrdrver
The memory controller is now able to dynamically adjust memory frequency and voltage, depending on current workload, power budget and other inputs—a first for the PC!
This is not a first. AMD did this with their Athlon laptop CPUs. It was problematic since, even though the DDR clocks changed, the timings did not. So you would get power savings (mainly because there was only one power plane), but with an increase in latency.

Unless Intel has some miracle way of changing timings on the fly, this is a terrible idea because it will only increase latencies.
Posted on Reply
#21
Valantar
TheLostSwedeI believe Direct Storage will bring tangible benefits, when implemented properly, but I think you're right with regards to PCIe 5.0 making no difference whatsoever to Direct Storage.
Oh, absolutely, I think DS has the potential to be a very major innovation, and I'm really looking forward to seeing it implemented.
Posted on Reply
#22
Makaveli
InVasManiDirect Storage is probably the most prominent thing where PCIe 5.0 is useful for that isn't dominantly just a marketing asterisk.



This is another good example that's basically CAT8 for USB4.
Direct Storage works with the new Xbox, which isn't even using PCIe 4.0 speeds and is closer to PCIe 3.0 speed.

I don't see PCIe 5.0 making a difference there at least at the start.
Posted on Reply
#23
KarymidoN
TheLostSwedeNo, it was not. It's only there for that niche case of running two PCIe 4.0 NVMe drives in raid.
This still had nothing to do with PCIe 4.0, but you're free to choose to not to understand that part.
You need to learn to differentiate between things.
www.pcgamesn.com/amd/amd-x570-chipset-fan-nobody-wants-this
:confused::confused::confused:
Posted on Reply
#24
Valantar
KarymidoNwww.pcgamesn.com/amd/amd-x570-chipset-fan-nobody-wants-this
:confused::confused::confused:
It needs a fan because it can consume more power than can be dissipated passively (without a large heatsink). That it can doesn't mean that it will in the vast majority of cases, but you can't gamble on that not happening with your specific design. That would lead to issues very quickly - a motherboard maker can't control how people make use of their products. Thus they include fans even if they likely aren't needed in the vast majority of cases. You design for the worst case scenario, or at least the worst reasonable scenario.

The only time I've heard of an X570 board overheating was a crazy dense SFF build where someone crammed a 5950X and RTX 2080Ti into an NFC S4M and cooled them with a single 140mm radiator (yes, apparently that is possible, though it requires a lot of custom fabrication, including a modified server PSU). They had removed the stock chipset heatsink for space savings, replaced it with a small standard chipset heatsink and a slim 40mm fan, but kept having throttling issues due to 100-110°C chipset temps. But that, needless to say, is a rather unusual scenario.
Posted on Reply