Sunday, February 4th 2024

AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs

AMD is readying its new 800-series motherboard chipsets to launch alongside the next-generation Ryzen 9000 series "Granite Ridge" desktop processors that implement the "Zen 5" microarchitecture. The chipset family will be led by the AMD X870E, a successor to the current X670E. Since AMD is retaining Socket AM5, the 800-series chipsets will support not just "Granite Ridge" at launch, but also the Ryzen 7000 series "Raphael" and Ryzen 8000 series "Hawk Point." Moore's Law is Dead goes into the details of what sets the X870E apart from the current X670E, and it all has to do with USB4.

Apparently, motherboard manufacturers will be mandated to include 40 Gbps USB4 connectivity with AMD X870E, which essentially makes the chipset a 3-chip solution: two Promontory 21 bridge chips, plus a discrete ASMedia ASM4242 USB4 host controller; although it's possible that AMD's QVL will allow other brands of USB4 controllers as they become available. The Ryzen 9000 series "Granite Ridge" are chiplet-based processors just like the Ryzen 7000 "Raphael," and while the 4 nm "Zen 5" CCDs are new, the 6 nm client I/O die (cIOD) is largely carried over from "Raphael," with a few updates to its memory controller. DDR5-6400 will be the new AMD-recommended "sweet spot" speed, although AMD might get its motherboard vendors to support DDR5-8000 EXPO profiles with an FCLK of 2400 MHz and a memory divider.
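For a back-of-the-envelope reference, here is the clock arithmetic behind those numbers as a minimal Python sketch, assuming the usual AM5 convention that DDR5 transfers twice per memory clock, and that the "divider" at DDR5-8000 would run the memory controller at half the memory clock (the exact ratios are assumptions, not anything AMD has confirmed):

```python
# Clock math for the speeds mentioned above. Assumptions (not from
# the article): DDR5 transfers twice per memory clock, and "a
# divider" means the memory controller (UCLK) runs at MCLK / 2
# at DDR5-8000.

def am5_clocks(mt_s: int, uclk_div: int = 1, fclk_mhz: int = 2000) -> dict:
    mclk = mt_s / 2            # DDR: two transfers per memory clock
    uclk = mclk / uclk_div     # memory-controller clock
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": fclk_mhz}

print(am5_clocks(6400))                             # 1:1 "sweet spot"
print(am5_clocks(8000, uclk_div=2, fclk_mhz=2400))  # divided mode
```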
The Ryzen 9000 series "Granite Ridge" will launch alongside a new wave of AMD X870E motherboards, although these processors will also be supported on AMD 600-series chipset motherboards with BIOS updates. The vast majority of Socket AM5 motherboards feature USB BIOS Flashback, so you could even pair a 600-series chipset motherboard with a Ryzen 9000 series processor out of the box. The company might expand the 800-series with other chipset models, such as the X870, B850, and the new B840 at the entry level.
Sources: Moore's Law is Dead (YouTube), Tweaktown

220 Comments on AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs

#126
Octavean
Tek-CheckIt's not about high-end, but about more versatile connectivity for those who need it. Both X670 and B650 chipsets could sit on high-end, mid-range, or entry platforms, depending on the implemented features and the quality of the design too.

If you compare the two chipsets, say on two Taichi boards, you will see more connectivity options for PCIe, NVMe, SATA, USB4, etc. on X670. If I do not need more connectivity, I'd buy B650. If I need more, I'd buy X670. Price, of course, also plays a role.
Exactly my point.

As I said before, I usually buy higher-end boards and enjoy the increased connectivity options as well as the improved fit and finish. Before the ASRock B650E PG Riptide WiFi (with a Ryzen 7950X), my daily driver was an ASRock X570 Taichi (with a Ryzen 3950X), and the difference between the two is noteworthy IMO.
Posted on Reply
#127
big_mac
TigerfoxThere are only two AM5 boards with two x4 slots from the PCH, very few with one x4 and one x1, and none with two x4 and one x1.
ASUS X670-P says hello ;)
Posted on Reply
#128
Tigerfox
big_macASUS X670-P says hello
Yes, I forgot about that one, the cheap little brother of the Prime X670E-Pro I chose. But the X670-P only has a Gen4 x16 slot, a crap Realtek codec (the spec sheet doesn't even say which one, so it must be really crappy), and its x1 slot is shared with two SATA ports (I don't really know how that works; normally one Gen3 x1 lane is shared with one SATA port).
It would have been better suited to my current use case (Xonar Xense and 6x SATA), but not for the future.

I really don't understand why they wired one M.2 slot on my X670E-Pro as Gen3 x4 or SATA, with two of its lanes shared with two SATA ports. Why only two lanes? Why not give the board six SATA ports and make all four lanes of that M.2 shared? Are switch ICs really that expensive?

Also, since until very recently I had to depend on WiFi, I thought it was neat to have WiFi integrated into the I/O panel, even though swapping the WiFi NIC was complicated; now I see it as just a waste of one Gen3 lane.
Posted on Reply
#129
Tek-Check
TigerfoxYes, I forgot about that one, the cheap little brother of the Prime X670E-Pro I chose. But the X670-P only has a Gen4 x16 slot, a crap Realtek codec (the spec sheet doesn't even say which one, so it must be really crappy), and its x1 slot is shared with two SATA ports (I don't really know how that works; normally one Gen3 x1 lane is shared with one SATA port).
It would have been better suited to my current use case (Xonar Xense and 6x SATA), but not for the future.

I really don't understand why they wired one M.2 slot on my X670E-Pro as Gen3 x4 or SATA, with two of its lanes shared with two SATA ports. Why only two lanes? Why not give the board six SATA ports and make all four lanes of that M.2 shared? Are switch ICs really that expensive?

Also, since until very recently I had to depend on WiFi, I thought it was neat to have WiFi integrated into the I/O panel, even though swapping the WiFi NIC was complicated; now I see it as just a waste of one Gen3 lane.
- this board uses almost all, if not all, of the available connectivity
- all Gen4 lanes are occupied, so you can't have more Gen4 lanes available for other devices
- some lanes are shared. On the chipset, PCIe 3.0 and SATA can be shared, as both are provided by the same PHY root complex
- one Gen3/SATA lane from chipset 1 is not counted. It disappeared somewhere...
- there is something confusing on this board, namely the Q-SW switch chip; it is fed by both chipsets, Gen3 x1 from PT21_1 and Gen3 x2 from PT21_2
- below it, there is the M.2_2 slot, fed by two Gen3 x2 links, one from each chipset
- when that M.2 slot runs in Gen3 x4 mode, the two Gen3 lanes from chipset 2 are no longer available for Q-SW, and so two SATA ports stop working, Asus says (see the sketch below)
- what is confusing to me is that the single Gen3 x1 lane from chipset 1 still feeds Q-SW, and there is another unaccounted-for Gen3 x1 lane on chipset 1. I cannot see a reason for those two SATA ports to be cut off even when the M.2 gets x4 bandwidth. Weird, unless Q-SW needs an x1 lane to function beyond data transmission.
Here is a diagram for this board
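And a rough sketch of that sharing rule as I read it off the diagram (the lane sources and counts below are my assumptions from the bullets above, not Asus documentation):

```python
# Toy model of the M.2_2 / Q-SW sharing rule described above.
# Assumed wiring: Q-SW gets Gen3 x1 from PT21_1 plus Gen3 x2 from
# PT21_2; M.2_2 gets Gen3 x2 from PT21_1 and can borrow the Gen3 x2
# from PT21_2. My reading of the diagram, not a specification.

def m2_2_config(x4_mode: bool) -> dict:
    qsw_lanes = 1          # Gen3 x1 from PT21_1, always wired to Q-SW
    m2_lanes = 2           # Gen3 x2 from PT21_1, always wired to M.2_2
    if x4_mode:
        m2_lanes += 2      # M.2_2 borrows the Gen3 x2 from PT21_2...
        sata_ports = 0     # ...so Q-SW loses its two SATA ports
    else:
        qsw_lanes += 2     # Gen3 x2 from PT21_2 stays with Q-SW
        sata_ports = 2
    return {"M.2_2 lanes": m2_lanes, "Q-SW lanes": qsw_lanes,
            "SATA on Q-SW": sata_ports}

print(m2_2_config(False))  # default: x2 M.2, both SATA ports live
print(m2_2_config(True))   # x4 M.2: the two shared SATA ports drop
```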
Posted on Reply
#130
529th
Remove the second CPU PCIe slot and give us a second CPU USB hub to split our high-polling input devices across, or something worthwhile. No one uses that x8 electrical PCIe slot anyway. Who wants their GPU running at x8 for gaming? I've tried it before and the quality diminishes. It's been the most useless wasted slot in history.
Posted on Reply
#131
AusWolf
529thRemove the second CPU PCIe slot and give us a second CPU USB hub to split our high-polling input devices across, or something worthwhile. No one uses that x8 electrical PCIe slot anyway. Who wants their GPU running at x8 for gaming? I've tried it before and the quality diminishes. It's been the most useless wasted slot in history.
Because that slot doesn't get its lanes from the CPU. I agree that it's kind of useless, though.
Posted on Reply
#132
529th
AusWolfBecause that slot doesn't get its lanes from the CPU. I agree that it's kind of useless, though.
Because I was talking about boards that split the CPU lanes into x8/x8. AM4 offered 24 PCIe lanes; AM5 offers 28. Chipsets and board vendors mix up the whole bag, but I'm not so sure there's much of a market that needs those first 16 direct CPU PCIe lanes split across another x16-length, x8 electrical slot, when most of the time there's an x4 PCIe slot from the chipset and a lot of connectivity offered on the back I/O. I mean, who needs FireWire or SCSI PCIe cards anymore? What else could be used in those slots? A sound card? That usually goes in the x4 slot. Ugh, it just kills me sometimes when I think about it.
Posted on Reply
#133
AusWolf
529thBecause I was talking about boards that split the CPU lanes into x8/x8. AM4 offered 24 PCIe lanes; AM5 offers 28. Chipsets and board vendors mix up the whole bag, but I'm not so sure there's much of a market that needs those first 16 direct CPU PCIe lanes split across another x16-length, x8 electrical slot, when most of the time there's an x4 PCIe slot from the chipset and a lot of connectivity offered on the back I/O. I mean, who needs FireWire or SCSI PCIe cards anymore? What else could be used in those slots? A sound card? That usually goes in the x4 slot. Ugh, it just kills me sometimes when I think about it.
Then don't use the x4 slot.

Most AM5 boards have the CPU provide one x16 for the GPU and two x4s for NVMe, and everything else comes from the chipset.
Posted on Reply
#134
kapone32
529thRemove the second CPU PCIe slot and give us a second CPU USB hub to split our high-polling input devices across, or something worthwhile. No one uses that x8 electrical PCIe slot anyway. Who wants their GPU running at x8 for gaming? I've tried it before and the quality diminishes. It's been the most useless wasted slot in history.
That might fit your use case, and there are boards that already do that. Just get an X670 board. X670E boards are marketed for exactly what you are complaining about. Some of these X670E boards also come with expansion cards for RAID 0 support. When you talk about x8, remember that it's PCIe 5.0, which means the same bandwidth as 4.0 x16. There is no GPU on the market that can saturate that, so there is no worry. If you don't want the PCIe expansion, get X670 or B650E.
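To put numbers on that claim, a quick sketch of the per-lane PCIe math (raw line rate with 128b/130b encoding; protocol overhead beyond the encoding is ignored):

```python
# Quick check on the bandwidth claim above: per-lane PCIe throughput
# doubles each generation, so a 5.0 x8 link matches a 4.0 x16 link.

TRANSFER_GT_S = {3: 8, 4: 16, 5: 32}   # per-lane line rate in GT/s

def pcie_gb_s(gen: int, lanes: int) -> float:
    # Gen3+ uses 128b/130b encoding: 128 payload bits per 130 on the wire
    return TRANSFER_GT_S[gen] * lanes * (128 / 130) / 8

print(f"PCIe 4.0 x16: {pcie_gb_s(4, 16):.1f} GB/s")  # ~31.5 GB/s
print(f"PCIe 5.0 x8:  {pcie_gb_s(5, 8):.1f} GB/s")   # ~31.5 GB/s
```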
Posted on Reply
#135
529th
This thread is about the X870E
Posted on Reply
#136
kapone32
529thThis thread is about the X870E
If it's X870E, don't expect boards not to have second PCIe slots. That is a selling feature of that tier of board.
Posted on Reply
#137
529th
The best scenario would be

x16 PCIe lanes from the CPU for the GPU

x4 PCIe lanes from the CPU for NVMe

x4 PCIe lanes from the CPU for USB

x4 PCIe lanes from the CPU dedicated to an x4 slot and not on a switch

x4 PCIe lanes from the chipset in an x4 slot

NVMe storage drives can be hosted on the chipset

the problem is the wasted board real estate with the second x16-length slot, x8 electrical, switched from the x16 GPU CPU lanes

That would be a proper high-end chipset board
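As a sanity check, here is that wishlist tallied against the commonly quoted AM5 budget of 28 CPU lanes, four of which form the chipset downlink; by this count the layout is four lanes over (the budget figures are the usual public numbers, not anything confirmed for X870E):

```python
# Tally the wishlist above against the commonly quoted AM5 lane
# budget: 28 CPU lanes total, 4 of which form the chipset downlink.

wishlist = {
    "GPU slot (x16)": 16,
    "CPU NVMe (x4)": 4,
    "CPU USB (x4)": 4,
    "dedicated x4 slot": 4,
}
used = sum(wishlist.values())   # 28
budget = 28 - 4                 # 24 usable after the chipset link
print(f"wishlist: {used} lanes, usable budget: {budget} "
      f"({used - budget} over)")
# wishlist: 28 lanes, usable budget: 24 (4 over)
```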
Posted on Reply
#138
AusWolf
529thx16 PCIe lanes from the CPU for the GPU

x4 PCIe lanes from the CPU for NVMe
You've got that.
529thx4 PCIe lanes from the CPU for USB
The CPU already has lots of USB; you don't need an x4 link allocated to that.
529thx4 PCIe lanes from the CPU dedicated to an x4 slot and not on a switch
Then you'd sacrifice the second M.2 slot. If you're happy with that, it could probably be an option, although expansion cards work through the chipset lanes just fine. I'm not sure why you need them routed directly from the CPU at the expense of an M.2 slot that can actually benefit from the better latency.
529thx4 PCIe lanes from the chipset in an x4 slot
You have a physical x16 slot doing that on most boards. What advantage does a smaller slot give you?
529thNVMe storage drives can be hosted on the chipset
NVMe is exactly what you want routed from the CPU for better latency. What other device could benefit more?
Posted on Reply
#139
529th
I understand this is all subjective, but PC gaming is a HUGE market, so let's get into this anyway. My opinions are based around optimal gaming connectivity. Back to the extra x8 electrical slot: it's a moot point because it doesn't really take away from the total number of CPU lanes, but it does take away board real estate, and that still sucks. It's not gaming-oriented anymore and a wasted add-on. No one is running SLI or CrossFire; I even forgot what they used to be called, lol. Of course it could be useful for storage to run off a card in that slot, but IMO your primary CPU-attached NVMe M.2 is enough in that regard. It's better to have a second dedicated PCIe slot for a card on CPU lanes for reduced latency in gaming: gaming needs to be as close to real time as possible, more so than storage drives. Whether it's a sound card in that slot or a separate USB hub to separate input peripherals, again, that's better than storage drives taking priority when in fact they are... storage drives. Again, gaming latency takes priority, so IMO storage drives don't need that reduced latency. In my experience, reducing the x16 GPU lanes to x8 to share with a PCIe card brought a noticeable degradation in performance. That was years ago with my 3800X. I would try this again if I could, but thanks to ridiculously large upper-tier GPU sizes, I can't right now. My ideas might be stuck in an antiquated era, so feel free to correct me if I'm wrong. What's the gaming market worth now? 29 billion, last I heard?
Posted on Reply
#140
kapone32
529thI understand this is all subjective, but PC gaming is a HUGE market, so let's get into this anyway. My opinions are based around optimal gaming connectivity. Back to the extra x8 electrical slot: it's a moot point because it doesn't really take away from the total number of CPU lanes, but it does take away board real estate, and that still sucks. It's not gaming-oriented anymore and a wasted add-on. No one is running SLI or CrossFire; I even forgot what they used to be called, lol. Of course it could be useful for storage to run off a card in that slot, but IMO your primary CPU-attached NVMe M.2 is enough in that regard. It's better to have a second dedicated PCIe slot for a card on CPU lanes for reduced latency in gaming: gaming needs to be as close to real time as possible, more so than storage drives. Whether it's a sound card in that slot or a separate USB hub to separate input peripherals, again, that's better than storage drives taking priority when in fact they are... storage drives. Again, gaming latency takes priority, so IMO storage drives don't need that reduced latency. In my experience, reducing the x16 GPU lanes to x8 to share with a PCIe card brought a noticeable degradation in performance. That was years ago with my 3800X. I would try this again if I could, but thanks to ridiculously large upper-tier GPU sizes, I can't right now. My ideas might be stuck in an antiquated era, so feel free to correct me if I'm wrong. What's the gaming market worth now? 29 billion, last I heard?
The only reason CrossFire and SLI were quashed is that both vendors were losing money supporting them. Even so, high-end AM4 boards still advertise CrossFire support. If you think one fast drive is enough, you have not seen how huge games are today. That expansion slot could be used like I use mine, with a RAID 0 NVMe array. Storage is the new thing, and NVMe is here to stay. You may want more USB, but buying a USB extender that plugs into the USB-C port is more than enough. There is also the option of adding a USB 3.0 card to the x4 slot. Of course, you are free to buy a B650 or B750 board that focuses on USB ports. As for gaming latency, if what you were saying were true, we would all still be rocking HDDs, and NVMe would not be as popular as it is. I am also sure there is a board out there that will satisfy your needs.
Posted on Reply
#141
529th
I thought the reason SLI and CrossFire sucked was that they added latency, like a frame or something, no matter how they were optimized, and that was the real reason behind the floundering sales; better to put the resources toward fixing driver bugs than SLI or CrossFire.

In what metric does a RAID 0 NVMe array benefit gaming latency? If it's purely drive space, then I'd need more convincing when there are 4 TB drives on the market. On a side note, I was running a SCSI drive in the pre-CS-beta era (I was more of a DoD player), maybe 2001 or 2002; it might have been a 15K 36 GB Fujitsu or something, so the idea of a fast drive interface intrigues me; please enlighten us.

In the current X670E market, the only boards you can get with a third CPU-attached PCIe slot that is x4 electrical are the MEG Ace or Godlike. How are you running your setup? Off the GPU lanes on a switch?

Separating input peripherals across different CPU USB hubs is optimal versus having both on one CPU USB hub. If any input device is on a chipset USB port, it's not optimal. Of course, higher polling rates adversely affect "PC latency," but unless you are on an HEDT platform like Threadripper, there will in the end be some form of I/O used for gaming coming from the chipset; still, that extra dedicated CPU slot option is king. You could run your expansion card through an x4 electrical slot on one of those; they are full x16-length slots.

lol, what are you talking about, gaming latency and being on HDDs?
Posted on Reply
#142
kapone32
529thI thought the reason SLI and CrossFire sucked was that they added latency, like a frame or something, no matter how they were optimized, and that was the real reason behind the floundering sales; better to put the resources toward fixing driver bugs than SLI or CrossFire.

In what metric does a RAID 0 NVMe array benefit gaming latency? If it's purely drive space, then I'd need more convincing when there are 4 TB drives on the market. On a side note, I was running a SCSI drive in the pre-CS-beta era (I was more of a DoD player), maybe 2001 or 2002; it might have been a 15K 36 GB Fujitsu or something, so the idea of a fast drive interface intrigues me; please enlighten us.

In the current X670E market, the only boards you can get with a third CPU-attached PCIe slot that is x4 electrical are the MEG Ace or Godlike. How are you running your setup? Off the GPU lanes on a switch?

Separating input peripherals across different CPU USB hubs is optimal versus having both on one CPU USB hub. If any input device is on a chipset USB port, it's not optimal. Of course, higher polling rates adversely affect "PC latency," but unless you are on an HEDT platform like Threadripper, there will in the end be some form of I/O used for gaming coming from the chipset; still, that extra dedicated CPU slot option is king. You could run your expansion card through an x4 electrical slot on one of those; they are full x16-length slots.

lol, what are you talking about, gaming latency and being on HDDs?
You have not investigated enough. I have the X670E-E Strix. It has two 5.0 drives in it, and I can add two more. Instead, though, I put in a WD AN1500, which fits in any slot and which I modded from 1 TB to 4 TB; that sits in the third slot. The second slot has a 4 TB NV2. I also have two SP 4.0 2 TB drives in RAID 0. My two 5.0 drives are a 1 TB MP700 for boot and a 2 TB MP700 as a game drive. I also have an 8 TB SSD and a 4 TB RAID 0 SSD array. The example I will use is strategy games like TW. Where you will love NAND is in moving games around, like when I upgraded to Win 11 and my 100+ Epic games asked to be re-downloaded. Using multiple drives let me see that Windows 10 has about a 1.9 GB/s max write rate and Win 11 about 2.9 GB/s. The latency you describe is overcome by the way my board is wired; you see, everything is connected to the CPU, even the chipset. Even with all of that, just look at the rear I/O for your USB wants.
Posted on Reply
#143
dgianstefani
TPU Proofreader
529thI thought the reason SLI and CrossFire sucked was that they added latency, like a frame or something, no matter how they were optimized, and that was the real reason behind the floundering sales; better to put the resources toward fixing driver bugs than SLI or CrossFire.

In what metric does a RAID 0 NVMe array benefit gaming latency? If it's purely drive space, then I'd need more convincing when there are 4 TB drives on the market. On a side note, I was running a SCSI drive in the pre-CS-beta era (I was more of a DoD player), maybe 2001 or 2002; it might have been a 15K 36 GB Fujitsu or something, so the idea of a fast drive interface intrigues me; please enlighten us.

In the current X670E market, the only boards you can get with a third CPU-attached PCIe slot that is x4 electrical are the MEG Ace or Godlike. How are you running your setup? Off the GPU lanes on a switch?

Separating input peripherals across different CPU USB hubs is optimal versus having both on one CPU USB hub. If any input device is on a chipset USB port, it's not optimal. Of course, higher polling rates adversely affect "PC latency," but unless you are on an HEDT platform like Threadripper, there will in the end be some form of I/O used for gaming coming from the chipset; still, that extra dedicated CPU slot option is king. You could run your expansion card through an x4 electrical slot on one of those; they are full x16-length slots.

lol, what are you talking about, gaming latency and being on HDDs?
RAID 0 typically has worse latency but better bandwidth and a higher chance of failure. He doesn't know what he's talking about. But then, he also associates dual-CCD Zen having higher write bandwidth "feeding the GPU" with being "smoother," despite all evidence to the contrary in testing (lower FPS), and despite the fact that the higher write bandwidth is simply because there are two connections to the IF/memory controller, one from each CCD. This does not benefit games, because if a game ran across two CCDs, it would have massive stuttering from the more-than-quadrupled core-to-core latency. Hence why AMD wrote a driver specifically to prevent this.

SLI/Crossfire does indeed suck. The frame pacing was never resolved.

Apparently Windows 10 has a write limit of 2 GB/s, and 11 is 3 GB/s. I learn something new every day /s :laugh: :laugh: :laugh:
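To put rough numbers on the RAID 0 trade-off mentioned above (the drive speed and failure rate here are illustrative assumptions, not measurements):

```python
# RAID 0 trade-off in rough numbers: best-case sequential bandwidth
# scales with the stripe count, while losing any one member drive
# loses the whole array. Figures below are illustrative assumptions.

def raid0(drives: int, drive_gb_s: float, annual_fail_rate: float):
    bandwidth = drives * drive_gb_s                  # sequential best case
    p_loss = 1 - (1 - annual_fail_rate) ** drives    # any-drive failure
    return bandwidth, p_loss

bw, p = raid0(2, 7.0, 0.01)   # two ~7 GB/s Gen4 drives at 1% AFR each
print(f"~{bw:.0f} GB/s sequential, {p:.1%} annual array-loss risk")
# ~14 GB/s sequential, 2.0% annual array-loss risk
```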
Posted on Reply
#144
529th
Good chat, guys, good chat. Hope everyone has a great holiday weekend if you celebrate Easter. I'm feeling fat because I ate too many Stella D'oro cookies, lol
Posted on Reply
#145
kapone32
dgianstefaniRAID 0 typically has worse latency but better bandwidth and a higher chance of failure. He doesn't know what he's talking about. But then, he also associates dual-CCD Zen having higher write bandwidth "feeding the GPU" with being "smoother," despite all evidence to the contrary in testing (lower FPS), and despite the fact that the higher write bandwidth is simply because there are two connections to the IF/memory controller, one from each CCD. This does not benefit games, because if a game ran across two CCDs, it would have massive stuttering from the more-than-quadrupled core-to-core latency. Hence why AMD wrote a driver specifically to prevent this.

SLI/Crossfire does indeed suck. The frame pacing was never resolved.

Apparently Windows 10 has a write limit of 2 GB/s, and 11 is 3 GB/s. I learn something new every day /s :laugh: :laugh: :laugh:
Yes, that argument held true for HDDs in 2012. I have been running RAID 0 with SSDs for over a decade and never had a drive fail. You think AMD wrote software to compromise their own chip? You don't even have a 7900X3D to substantiate your claim, just like when you purported that the 7800X3D is faster in every game (using 12 games). By the way, these are all at 4K. I just finished downloading Afterimage, so you can explain to me how 292 FPS is bad. What you don't get about the 7900X3D is that when there is no V-Cache support, you get six cores running at 5.6 GHz.

As far as SLI and CrossFire go, all I will say is that I made sure I got games that supported CrossFire based on the wiki page. That meant Sleeping Dogs ran at over 200 FPS and TW Rome 2 doubled in FPS. Did you have Polaris cards in CrossFire? That was at the driver level and needed no bridge. As most of us were using 60 Hz panels (unless you had one of those Korean monitors that depended on the GPU for scaling), that led to most of the issues you describe. If SLI sucked, why did Nvidia disable it on the GTS 450 without telling anyone?

You can make fun of me all you want; all I am talking about is my real-world experience. Do you have all of the different types of drives I have? Do you know how SSD RAID 0 is better than NVMe RAID 0? You don't get the drop-off that you get with NVMe. I guess you will tell me that after using a 5.0 drive for so long, there is no tangible difference between it and a high-end 4.0 drive like the Seagate 530 2TB for daily use.
Posted on Reply
#146
AusWolf
529thI understand this is all subjective, but PC gaming is a HUGE market, so let's get into this anyway. My opinions are based around optimal gaming connectivity. Back to the extra x8 electrical slot: it's a moot point because it doesn't really take away from the total number of CPU lanes, but it does take away board real estate, and that still sucks. It's not gaming-oriented anymore and a wasted add-on. No one is running SLI or CrossFire; I even forgot what they used to be called, lol. Of course it could be useful for storage to run off a card in that slot, but IMO your primary CPU-attached NVMe M.2 is enough in that regard. It's better to have a second dedicated PCIe slot for a card on CPU lanes for reduced latency in gaming: gaming needs to be as close to real time as possible, more so than storage drives. Whether it's a sound card in that slot or a separate USB hub to separate input peripherals, again, that's better than storage drives taking priority when in fact they are... storage drives. Again, gaming latency takes priority, so IMO storage drives don't need that reduced latency. In my experience, reducing the x16 GPU lanes to x8 to share with a PCIe card brought a noticeable degradation in performance. That was years ago with my 3800X. I would try this again if I could, but thanks to ridiculously large upper-tier GPU sizes, I can't right now. My ideas might be stuck in an antiquated era, so feel free to correct me if I'm wrong. What's the gaming market worth now? 29 billion, last I heard?
I still don't see why you would need your secondary PCI-e slot to run from the CPU.

Sound card? Only a handful of audiophiles still use them. Graphics cards being able to process audio basically killed this market.

USB controllers for better input latency? You have a crapton of USB ports already running from the CPU, so how many more keyboards and mice do you want to connect?

Personally, I'm happy with the way things are. A separate OS and game drive is a must for me. In case of an OS drive failure, I can just reinstall it without having to wait 2-3 days for my game library to download. Yes, my internet connection is really that bad. And in case of a game drive failure, I can still use my PC as normal while my game library downloads again onto the new drive.
Posted on Reply
#147
Tigerfox
529thRemove the second CPU PCIe slot and give us a second CPU USB hub to split our high-polling input devices across, or something worthwhile. No one uses that x8 electrical PCIe slot anyway. Who wants their GPU running at x8 for gaming? I've tried it before and the quality diminishes. It's been the most useless wasted slot in history.
I vote for the opposite: make a second x16 slot from the CPU (x8 electrical) standard, better even two (x8/x8/x4)
529thBecause I was talking about boards that split the CPU lanes into x8/x8. AM4 offered 24 PCIe lanes; AM5 offers 28. Chipsets and board vendors mix up the whole bag, but I'm not so sure there's much of a market that needs those first 16 direct CPU PCIe lanes split across another x16-length, x8 electrical slot, when most of the time there's an x4 PCIe slot from the chipset and a lot of connectivity offered on the back I/O. I mean, who needs FireWire or SCSI PCIe cards anymore? What else could be used in those slots? A sound card? That usually goes in the x4 slot. Ugh, it just kills me sometimes when I think about it.
There could be a market. AM5 is better suited for budget workstations than any other mainstream platform because of its 16-core CPUs (even more cores in the future?), ECC RAM, and lots of PCIe 5.0. You could use a second Gen5 x8 slot for the graphics card and an x16-to-4x-M.2 adapter for Gen5 SSDs, or use the iGPU, the two onboard M.2 Gen5 slots, and the 4x M.2 adapter for six M.2 Gen5 drives in total
529thThe best scenario would be

x16 PCIe lanes from the CPU for the GPU

x4 PCIe lanes from the CPU for NVMe

x4 PCIe lanes from the CPU for USB

x4 PCIe lanes from the CPU dedicated to an x4 slot and not on a switch

x4 PCIe lanes from the chipset in an x4 slot

NVMe storage drives can be hosted on the chipset

the problem is the wasted board real estate with the second x16-length slot, x8 electrical, switched from the x16 GPU CPU lanes

That would be a proper high-end chipset board
Why do you want four PCIe lanes for USB? Thinking of USB4? That only uses Gen4 x4, so four lanes from the PCH would be enough. Only USB4v2/TB5 with asymmetric 120/40 Gbps will need Gen5 x4.

Ideal for me would be:

1x Gen5 x16/x8 from CPU for GPU, 4x M.2, or 200 GbE

1x Gen5 x8/x0 from CPU for GPU, 2x M.2, or 100 GbE

1-2x Gen5 x4 from CPU, shared with 1-2x M.2, for NVMe, USB4v2/TB5 (or USB4/TB4 for now), 25/50 GbE, or a SAS RAID controller

2x Gen4 x4 from PCH, shared with M.2, for 10 GbE, USB4/TB4, a SATA controller, or HDMI video capture

2-3x Gen4 x1 for a sound card, TV tuner, or SATA

X870E should at least have 5 GbE + WiFi 7, but could have 10 GbE (the Marvell AQtion AQC113C can use 1x Gen4 or 2x Gen3)
529thI understand this is all subjective, but PC gaming is a HUGE market, so let's get into this anyway. My opinions are based around optimal gaming connectivity. Back to the extra x8 electrical slot: it's a moot point because it doesn't really take away from the total number of CPU lanes, but it does take away board real estate, and that still sucks. It's not gaming-oriented anymore and a wasted add-on. No one is running SLI or CrossFire; I even forgot what they used to be called, lol. Of course it could be useful for storage to run off a card in that slot, but IMO your primary CPU-attached NVMe M.2 is enough in that regard. It's better to have a second dedicated PCIe slot for a card on CPU lanes for reduced latency in gaming: gaming needs to be as close to real time as possible, more so than storage drives. Whether it's a sound card in that slot or a separate USB hub to separate input peripherals, again, that's better than storage drives taking priority when in fact they are... storage drives. Again, gaming latency takes priority, so IMO storage drives don't need that reduced latency. In my experience, reducing the x16 GPU lanes to x8 to share with a PCIe card brought a noticeable degradation in performance. That was years ago with my 3800X. I would try this again if I could, but thanks to ridiculously large upper-tier GPU sizes, I can't right now. My ideas might be stuck in an antiquated era, so feel free to correct me if I'm wrong. What's the gaming market worth now? 29 billion, last I heard?
SLI/CrossFire are dead and were never really useful. Gaming doesn't need any of that. You need a Gen4 x16 slot (next-gen GPUs will use Gen5, but it won't make any difference with 16 lanes) and one M.2 slot. Again, Gen3, 4, or 5 doesn't make any difference for games, and neither does RAM clock. Wake up: you only need a 7800X3D (or 9800X3D), enough RAM, and one NVMe drive, and the rest of the budget should go into the GPU.
AusWolfI still don't see why you would need your secondary PCI-e slot to run from the CPU.

Sound card? Only a handful of audiophiles still use them. Graphics cards being able to process audio basically killed this market.
What are you talking about? GPUs processing audio? The only thing they can do with audio is output digital audio via HDMI or DP. The days of PCIe sound cards might be over in favour of USB, both only with audiophiles, because the onboard codecs on good boards are really good (ALC1220/4080/4082, not that ALC897 crap), but every good PCIe sound card that anyone ever bought will always be better than today's onboard codecs.
Posted on Reply
#148
dgianstefani
TPU Proofreader
TigerfoxI vote for the opposite: make a second x16 slot from the CPU (x8 electrical) standard, better even two (x8/x8/x4)


There could be a market. AM5 is better suited for budget workstations than any other mainstream platform because of its 16-core CPUs (even more cores in the future?), ECC RAM, and lots of PCIe 5.0. You could use a second Gen5 x8 slot for the graphics card and an x16-to-4x-M.2 adapter for Gen5 SSDs, or use the iGPU, the two onboard M.2 Gen5 slots, and the 4x M.2 adapter for six M.2 Gen5 drives in total

Why do you want four PCIe lanes for USB? Thinking of USB4? That only uses Gen4 x4, so four lanes from the PCH would be enough. Only USB4v2/TB5 with asymmetric 120/40 Gbps will need Gen5 x4.

Ideal for me would be:

1x Gen5 x16/x8 from CPU for GPU, 4x M.2, or 200 GbE

1x Gen5 x8/x0 from CPU for GPU, 2x M.2, or 100 GbE

1-2x Gen5 x4 from CPU, shared with 1-2x M.2, for NVMe, USB4v2/TB5 (or USB4/TB4 for now), 25/50 GbE, or a SAS RAID controller

2x Gen4 x4 from PCH, shared with M.2, for 10 GbE, USB4/TB4, a SATA controller, or HDMI video capture

2-3x Gen4 x1 for a sound card, TV tuner, or SATA

X870E should at least have 5 GbE + WiFi 7, but could have 10 GbE (the Marvell AQtion AQC113C can use 1x Gen4 or 2x Gen3)

SLI/CrossFire are dead and were never really useful. Gaming doesn't need any of that. You need a Gen4 x16 slot (next-gen GPUs will use Gen5, but it won't make any difference with 16 lanes) and one M.2 slot. Again, Gen3, 4, or 5 doesn't make any difference for games, and neither does RAM clock. Wake up: you only need a 7800X3D (or 9800X3D), enough RAM, and one NVMe drive, and the rest of the budget should go into the GPU.
Everything else is true, but RAM clock/latency matters significantly in games, even with X3D.

I have personally tested this.

Sure, if you're running a low refresh rate, RAM speed won't matter as much, but at 165 Hz+ averages go up, as do minimums with a faster RAM tune, so you can still see a benefit at 120 Hz or even 60 Hz. Getting sub-60 ns is important for either Intel or AMD. A peak tune is around 55 ns for AMD and 45 ns for Intel. Bandwidth also matters for some games, but latency matters more for most.
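For anyone who wants to reproduce the arithmetic, the CAS portion of a kit's latency converts directly from its timings; the rest of a measured ~55-60 ns load-to-use figure comes from the controller and fabric, which this simple conversion does not capture:

```python
# Convert a RAM kit's CAS timing into nanoseconds: one DDR clock is
# 2000 / (MT/s) ns, so CAS latency in ns is CL * 2000 / (MT/s).
# Controller and fabric overhead (the rest of a measured ~55-60 ns
# load-to-use latency) is not modeled here.

def cas_ns(cl: int, mt_s: int) -> float:
    return cl * 2000 / mt_s

print(f"DDR5-4800 CL40: {cas_ns(40, 4800):.1f} ns")  # ~16.7 ns
print(f"DDR5-6000 CL30: {cas_ns(30, 6000):.1f} ns")  # 10.0 ns
```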
Posted on Reply
#149
AusWolf
dgianstefaniEverything else is true, but RAM clock/latency matters significantly in games, even with X3D.

I have personally tested this.

Sure, if you're running a low refresh rate, RAM speed won't matter as much, but at 165 Hz+ averages go up, as do minimums with a faster RAM tune, so you can still see a benefit at 120 Hz or even 60 Hz. Getting sub-60 ns is important for either Intel or AMD. A peak tune is around 55 ns for AMD and 45 ns for Intel. Bandwidth also matters for some games, but latency matters more for most.
That also depends on your play style. For me, a slow-paced, atmospheric, single-player gamer, RAM speed and latency don't matter at all. It's probably different for players of fast-paced online shooters.
Posted on Reply
#150
dgianstefani
TPU Proofreader
AusWolfThat also depends on your play style. For me, a slow-paced, atmospheric, single-player gamer, RAM speed and latency don't matter at all. It's probably different for players of fast-paced online shooters.
OK, set your memory to 4800 MT/s JEDEC and report back afterwards if you still believe this.

You're on a 144 Hz monitor, so your averages probably wouldn't be affected too much, but I'd wager almost anything that you'd notice your minimum FPS dipping below 60 quite frequently.

Regardless, RAM speed/latency objectively matters to gaming performance, even if you have an X3D chip; this is a statement of fact. Depending on your monitor's refresh rate, your GPU's power, and the type of gamer you are, you may notice it more or less, but the underlying technical data is irrefutable.
Posted on Reply