Thursday, May 23rd 2019

AMD X570 Unofficial Platform Diagram Revealed, Chipset Puts out PCIe Gen 4

AMD X570 is the company's first in-house-designed socket AM4 motherboard chipset, the X370 and X470 chipsets having been originally designed by ASMedia. With the X570, AMD hopes to leverage the new PCI-Express gen 4.0 connectivity of its Ryzen 3000 Zen 2 "Matisse" processors. The desktop platform that combines a Ryzen 3000-series processor with the X570 chipset is codenamed "Valhalla." A rough platform diagram, like what you'd find in motherboard manuals, surfaced on ChipHell, confirming several features. To maintain pin-compatibility with older generations of Ryzen processors, Ryzen 3000 has the exact same connectivity from the SoC, save for two key differences.

On the AM4 "Valhalla" platform, the SoC puts out a total of 28 PCI-Express gen 4.0 lanes. 16 of these are allocated to PEG (PCI-Express graphics), configurable through external switches and redrivers as either a single x16 slot or two x8 slots. Besides the 16 PEG lanes, 4 lanes are allocated to one M.2 NVMe slot. The remaining 4 lanes serve as the chipset bus. With the X570 rumored to support gen 4.0 at least upstream, chipset-bus bandwidth is expected to double to 64 Gbps. Since it's an SoC, the socket is also wired to the LPCIO (SuperIO) controller. The processor's integrated southbridge puts out two SATA 6 Gbps ports, one of which is switchable to the first M.2 slot, and four 5 Gbps USB 3.x ports. It also has an "Azalia" HD audio bus, so the motherboard's audio solution is wired directly to the SoC. Things get very interesting with the connectivity put out by the X570 chipset.
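As a quick sanity check on that figure (a sketch, not from the diagram): PCIe gen 4.0 doubles the per-lane rate of gen 3.0 from 8 GT/s to 16 GT/s, so an x4 link goes from 32 Gbps to 64 Gbps raw, slightly less after 128b/130b encoding. A minimal calculator, assuming standard PCIe per-lane rates:

```python
# Rough PCIe link-bandwidth calculator (illustrative sketch, standard rates).
RAW_GTPS = {3: 8.0, 4: 16.0}  # per-lane raw rate in GT/s for gen 3 / gen 4
ENCODING = 128 / 130          # 128b/130b line-code efficiency

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Effective one-direction bandwidth in Gbps for a PCIe link."""
    return RAW_GTPS[gen] * lanes * ENCODING

print(f"gen3 x4: {link_bandwidth_gbps(3, 4):.1f} Gbps")  # ~31.5 (X470 chipset bus)
print(f"gen4 x4: {link_bandwidth_gbps(4, 4):.1f} Gbps")  # ~63.0 (X570 chipset bus)
print(f"gen4 x1: {link_bandwidth_gbps(4, 1):.1f} Gbps")  # ~15.8, enough for 10GbE
```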
Update May 21st: There is also information on the X570 chipset's TDP.
Update May 23rd: HKEPC posted what looks like an official AMD slide with a nicer-looking platform map. It confirms that AMD is going full-tilt with PCIe gen 4, both as the chipset bus and as downstream PCIe connectivity.

AMD X570 overcomes the greatest shortcoming of the previous-generation X470 "Promontory" chipset - downstream PCIe connectivity. The X570 chipset appears to put out 16 downstream PCI-Express gen 4.0 lanes. Eight of these are allocated to two M.2 slots with x4 wiring each, and the rest serve as x1 links. Of these, three are put out as x1 slots; one lane drives an ASMedia ASM1143 controller (which takes in one gen 3.0 x1 link and puts out two 10 Gbps USB 3.x gen 2 ports); one lane drives the board's onboard 1 GbE controller (choices include the Killer E2500, Intel i211-AT, or even a Realtek 2.5G chip); and one lane goes to an 802.11ax WLAN card such as the Intel "Cyclone Peak." Other southbridge connectivity includes a 6-port SATA 6 Gbps RAID controller, four 5 Gbps USB 3.x gen 1 ports, and four USB 2.0/1.1 ports.

Update May 21st: The source also mentions the TDP of the AMD X570 chipset to be at least 15 Watts, a 3-fold increase over the X470 with its 5W TDP. This explains why every X570-based motherboard picture leak we've seen thus far shows a fan-heatsink over the chipset.
Sources: ChipHell Forums, HKEPC

75 Comments on AMD X570 Unofficial Platform Diagram Revealed, Chipset Puts out PCIe Gen 4

#51
nemesis.ie
More than likely it will be a Gen 4 x4 NVMe.
#53
Valantar
springs113A little off topic, but I wonder if the next generation of consoles will utilize gen 4 PCIe? I saw a Sony demo the other day and the loading times were fantastic. If such speeds are expected on a console, then on a high-end PC that should be a given. Wouldn't mind a full motherboard RGB block that covers the chipset as well. I need a lil bling in my life.
nemesis.ieMore than likely it will be a Gen 4 x4 NVMe.
Yeah, sounds likely. The cost difference between an off-the-shelf PCIe 4.0 NVMe controller and a similar PCIe 3.0 one ought to be negligible (though OEMs are likely to charge a premium for them, at least at first), so if Sony is going NVMe with the PS5 there's little reason to expect it not to be PCIe 4.0. Then again, they'll need to change the I/O scheme drastically from the PS4, where all I/O is handled through the ARM SoC in the chipset. Unless they've had AMD design that as well? They do have an ARM licence, so who knows?
#54
HTC
Spotted the below pic @ Anandtech's forums:

[X570 platform diagram image]

Original source (German).
#55
Valantar
HTCSpotted the below pic @ Anandtech's forums: [X570 platform diagram image] Original source (German).
... you haven't seen the updated news post on the front page then? It's the very one we're discussing in the comments of ;)
#56
HTC
Valantar... you haven't seen the updated news post on the front page then? It's the very one we're discussing in the comments of ;)
I had not ... facepalm ... oooops ...
#57
newtekie1
Semi-Retired Folder
Valantar...which is exactly what PCIe 4.0 allows for. How? By doubling bandwidth per lane. A PCIe 4.0 x2 SSD can match the performance of a PCIe 3.0 x4 SSD, meaning that you can run two fast SSDs off the same number of lanes as one previously. A single 4.0 lane is enough for a 10GbE NIC, where you previously needed two lanes. And so on and so forth. GPUs won't need more than x8 PCIe 4.0 for the foreseeable future (and in reality hardly more than x4 for most GPUs), so splitting off lanes for storage or other controllers is less of an issue. Sure, performance (or the advantage of splitting lanes) is lost if using PCIe 3.0 devices, but there is flexibility to be had - for example a motherboard might have two m.2 slots where they share the latter two lanes (switchable in BIOS, ideally) so that you can run either two ~3.5GB/s SSDs or one faster than that. Motherboard trace routing will also become less complex if the thinking shifts this way, leading to potentially cheaper motherboards or more features at lower price points.
That all works only in theory. The theory falls apart, however, when you consider the fact that there aren't any PCI-E 4.0 devices, and there likely won't be any affordable ones within the usable lifespan of this chipset. Which means all those PCI-E 4.0 lanes will be running as PCI-E 3.0, at half the bandwidth. PCI-E 4.0 is, at this point, just a marketing gimmick. Yeah, it might be nice to double the interconnect between the chipset and the CPU, but for actual lanes coming off the chipset, I'd rather have 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes right now.
TheLostSwedeBased on? It's 16 lanes in total. Eight for M.2, one for Ethernet, one for Wi-Fi and six for expansion slots. You need more?
Technically external USB controllers shouldn't be needed, as all the USB 3 ports are 3.1 G2 and the chipset should support eight of them.
Yes, I'd like a 3rd PCI-E x16 slot wired electrically as x8 instead of just x4. And even with just an x4 electrically wired x16 slot, you're down to 12 lanes. Two x4 M.2 slots and you're down to 4 lanes left. One for WiFi and you're down to 3 lanes left. Two Gigabit Ethernet ports and you're down to 1 lane left. Two x1 slots and... oh wait, you can't, because you're out of lanes. Want to add another SATA controller? Nope, out of lanes. Another USB controller? Nope, out of lanes.
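A quick sketch of that tally (the device list mirrors the hypothetical build above, not the actual X570 diagram):

```python
# Tallying a hypothetical loaded build against X570's 16 downstream lanes.
CHIPSET_LANES = 16

wishlist = [
    ("x16 slot (electrically x4)", 4),
    ("M.2 slot #1 (x4)", 4),
    ("M.2 slot #2 (x4)", 4),
    ("WiFi card", 1),
    ("GbE port #1", 1),
    ("GbE port #2", 1),
    ("x1 slot #1", 1),
    ("x1 slot #2", 1),  # 17 lanes requested in total
]

remaining = CHIPSET_LANES
for device, lanes in wishlist:
    remaining -= lanes
    status = "OK" if remaining >= 0 else "OUT OF LANES"
    print(f"{device:<28} -{lanes}  left: {remaining:>2}  {status}")
```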
#58
Valantar
newtekie1That all works only in theory. The theory falls apart, however, when you consider the fact that there aren't any PCI-E 4.0 devices, and there likely won't be any affordable ones within the usable lifespan of this chipset. Which means all those PCI-E 4.0 lanes will be running as PCI-E 3.0, at half the bandwidth. PCI-E 4.0 is, at this point, just a marketing gimmick. Yeah, it might be nice to double the interconnect between the chipset and the CPU, but for actual lanes coming off the chipset, I'd rather have 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes right now.
The first consumer PCIe 4.0 NVMe controllers are already announced and will be arriving in SSDs this fall. They are targeted at the mainstream/upper mainstream NVMe market, i.e. WD Black or Samsung Evo-ish prices. Perfectly fine, in other words. And no doubt other AICs will start adopting the standard over the next couple of years. It'll take time, sure, but it will happen. And the "usable lifespan of this chipset" is 5+ years, and I can guarantee you there'll be plenty of PCIe 4.0 devices by that time.
#59
newtekie1
Semi-Retired Folder
ValantarThe first consumer PCIe 4.0 NVMe controllers are already announced and will be arriving in SSDs this fall. They are targeted at the mainstream/upper mainstream NVMe market, i.e. WD Black or Samsung Evo-ish prices. Perfectly fine, in other words. And no doubt other AICs will start adopting the standard over the next couple of years. It'll take time, sure, but it will happen. And the "usable lifespan of this chipset" is 5+ years, and I can guarantee you there'll be plenty of PCIe 4.0 devices by that time.
I'd like to know where you are getting your pricing for these SSDs. Nothing I've seen mentions planned pricing. Furthermore, PCI-E 4.0 on an M.2 SSD isn't going to make a noticeable difference. We are hitting diminishing returns as it is. There is almost no noticeable difference between an x2 SSD and an x4 as it is; do you really think doubling the bandwidth again is going to make a sudden difference? No, it isn't. It will be nothing more than a marketing gimmick, with "rated" sequential reads that are huge, but actual performance that isn't really better than other PCI-E 3.0 drives.

Sure, you can make the argument that they can use PCI-E 4.0 x2 drives and only use 2 of the 16 lanes. But that also doesn't work, because motherboard manufacturers are not going to put x2 M.2 slots on their boards unless they absolutely have to (because they are out of PCI-E lanes). The reason being that most consumers are going to be putting PCI-E 3.0 drives in the slots, so they will be limited to PCI-E 3.0 x2, and that looks bad on paper.

Sure, people will have these boards in use for 5+ years, but that is not what I mean by the usable lifespan of the chipset. The usable lifespan of the chipset is how long manufacturers will be designing motherboards around this chipset. That lifespan is likely a year, maybe 2. And the fact is, in that time it is unlikely they will actually be designing boards with PCI-E 4.0 as the priority. They will still be designing boards assuming people will be using PCI-E 3.0 devices, because it doesn't make sense to design boards for use 3 years after they are bought.
#60
qcmadness
newtekie1I'd like to know where you are getting your pricing for these SSDs. Nothing I've seen mentions planned pricing. Furthermore, PCI-E 4.0 on an M.2 SSD isn't going to make a noticeable difference. We are hitting diminishing returns as it is. There is almost no noticeable difference between an x2 SSD and an x4 as it is; do you really think doubling the bandwidth again is going to make a sudden difference? No, it isn't. It will be nothing more than a marketing gimmick, with "rated" sequential reads that are huge, but actual performance that isn't really better than other PCI-E 3.0 drives.

Sure, you can make the argument that they can use PCI-E 4.0 x2 drives and only use 2 of the 16 lanes. But that also doesn't work, because motherboard manufacturers are not going to put x2 M.2 slots on their boards unless they absolutely have to (because they are out of PCI-E lanes). The reason being that most consumers are going to be putting PCI-E 3.0 drives in the slots, so they will be limited to PCI-E 3.0 x2, and that looks bad on paper.

Sure, people will have these boards in use for 5+ years, but that is not what I mean by the usable lifespan of the chipset. The usable lifespan of the chipset is how long manufacturers will be designing motherboards around this chipset. That lifespan is likely a year, maybe 2. And the fact is, in that time it is unlikely they will actually be designing boards with PCI-E 4.0 as the priority. They will still be designing boards assuming people will be using PCI-E 3.0 devices, because it doesn't make sense to design boards for use 3 years after they are bought.
amp.tomshardware.com/news/ssd-pcie-4.0-phison-nvme,38418.html
#61
Totally
TheLostSwedeSo we should stop at PCIe 3.0 and call it a day? Motherboards have always been ahead of graphics cards, be it when VL-Bus, PCI, AGP or PCI Express came out.
It's kind of how it has to work. Obviously with PCIe, we haven't had to change the physical interface for a few generations, so it has been a lot easier than in the past to transition to a new, faster version. Pointless is a very strong word in this case and you also seem to have missed the fact that there will be PCIe 4.0 NVMe SSDs coming out soon, which will reap benefits from the faster interface. How useful the extra speed will be to most people is a different matter. Also, as I mentioned elsewhere, this will allow for a single PCIe lane on 10Gbps Ethernet cards which might make them more affordable and more common.
There's no reasoning with those who think like this: GPUs and add-on cards could have arrived with the spec before motherboards, and they'd still have the gall to malign them because motherboards don't support the spec yet. The reason motherboards tend to get new specs first is that their upgrade cycle is typically much longer than that of any other component in the system.
#62
my_name_is_earl
AMD couldn't even compete for the crown in Gen3. Gen4 is not gonna save them.
#63
nemesis.ie
@newtekie1 It looks like these new NVMe drives will have 900,000+ IOPS; that should definitely be noticeable.

It's the lower QD/small R/W tasks that are the reason we don't see much real-world difference in current products.

3x the IOPS will move things along nicely even if the b/w has only gone up 30%.

@my_name_is_earl What are you on about? AMD's platforms have more lanes than their competitor.

Maybe wait until Tuesday before you respond?
#64
Valantar
newtekie1I'd like to know where you are getting your pricing for these SSDs. Nothing I've seen mentions planned pricing. Furthermore, PCI-E 4.0 on an M.2 SSD isn't going to make a noticeable difference. We are hitting diminishing returns as it is. There is almost no noticeable difference between an x2 SSD and an x4 as it is; do you really think doubling the bandwidth again is going to make a sudden difference? No, it isn't. It will be nothing more than a marketing gimmick, with "rated" sequential reads that are huge, but actual performance that isn't really better than other PCI-E 3.0 drives.

Sure, you can make the argument that they can use PCI-E 4.0 x2 drives and only use 2 of the 16 lanes. But that also doesn't work, because motherboard manufacturers are not going to put x2 M.2 slots on their boards unless they absolutely have to (because they are out of PCI-E lanes). The reason being that most consumers are going to be putting PCI-E 3.0 drives in the slots, so they will be limited to PCI-E 3.0 x2, and that looks bad on paper.

Sure, people will have these boards in use for 5+ years, but that is not what I mean by the usable lifespan of the chipset. The usable lifespan of the chipset is how long manufacturers will be designing motherboards around this chipset. That lifespan is likely a year, maybe 2. And the fact is, in that time it is unlikely they will actually be designing boards with PCI-E 4.0 as the priority. They will still be designing boards assuming people will be using PCI-E 3.0 devices, because it doesn't make sense to design boards for use 3 years after they are bought.
Motherboard manufacturers are very fond of sharing lanes across slots/ports/devices, and it would be entirely possible for them to stuff a board to the gills with m.2 slots while making some of them switchable 2-lane setups. 2-lane PCIe 3.0 drives are already very popular, and while I agree that limiting a motherboard slot to PCIe 3.0 x2 isn't ideal, more slots are always better. I for one would love it if a board came with five m.2 slots - one 4.0 x4 from the CPU, two 4.0 x4 from the chipset, with both of these switchable separately to x2, enabling the last two slots at x2. That way you'd have all the SSD space you could possibly need, while maintaining performance. You could have three full-speed >8 GBps (theoretical) drives, or three 3.0 x4 drives, or a mix of various types and interface widths. Have an old 3.0 x4 drive, an old 3.0 x2 drive, and a new 4.0 x2 drive? You'd still be able to fit in two more x2 drives or a single x4 drive. Yes, it would require users to RTFM, but that's already quite common with all the "m.2 slot 1 disables SATA_1 and SATA_0" etc. going on today.
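A minimal sketch of how that hypothetical five-slot arrangement could be modeled (the slot names and switching rules are the proposal above, not a shipping board):

```python
# Hypothetical layout: one always-on gen4 x4 M.2 slot from the CPU, plus two
# gen4 x4 chipset slots, each switchable to x2 to enable a companion x2 slot.
def slot_config(split_a: bool, split_b: bool) -> dict:
    """Active M.2 slots -> (PCIe generation, lane count) for a switch setting."""
    slots = {"M2_CPU": (4, 4)}               # CPU-attached, always gen4 x4
    slots["M2_A"] = (4, 2 if split_a else 4)
    if split_a:
        slots["M2_A2"] = (4, 2)              # companion slot shares M2_A's lanes
    slots["M2_B"] = (4, 2 if split_b else 4)
    if split_b:
        slots["M2_B2"] = (4, 2)
    return slots

print(slot_config(False, False))  # three x4 slots
print(slot_config(True, True))    # five slots: one x4 plus four x2
```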

As for pricing, there's nothing announced, but there is zero reason to expect them to drastically increase in price over today's most common drives. I'd be very surprised if they were as expensive as the 970 Pro, and prices are bound to drop quickly as more options arrive. A relatively minor interface upgrade for a commodity like an SSD is hardly grounds for a doubling in price.
#65
newtekie1
Semi-Retired Folder
qcmadnessamp.tomshardware.com/news/ssd-pcie-4.0-phison-nvme,38418.html
I'm not getting your point. Are you trying to prove that PCI-E 4.0 SSDs are coming? I never disputed that.
ValantarMotherboard manufacturers are very fond of sharing lanes across slots/ports/devices, and it would be entirely possible for them to stuff a board to the gills with m.2 slots while making some of them switchable 2-lane setups.
I wouldn't say any motherboard manufacturer is fond of sharing lanes. They do it because they have to. Heck, the diagrams for X570 show that they are already doing that. The second M.2 slot in the first diagram is shared with the third PCI-E x16 (electrically x4 :rolleyes:) slot. So plugging in an additional M.2 drive disables the PCI-E slot.

My entire point is this shouldn't have to be a necessity. Give me enough lanes so that when I plug in an M.2 drive, my RAID card doesn't stop working.
Valantar2-lane PCIe 3.0 drives are already very popular
I wouldn't say they are popular. They are just a thing that exists. And they exist because the drives themselves can't really use more than an x2 connection anyway. If you are saying those are very popular and hence what everyone is buying, then there really is no need for PCI-E 4.0 drives, as I said.
ValantarAs for pricing, there's nothing announced, but there is zero reason to expect them to drastically increase in price over today's most common drives.
You don't have a good grasp on how the tech world works, do you? The latest bleeding-edge technology, especially when it is the "fastest on the market," is never cheap. If they can throw a marketing gimmick in the specs that's new and faster, the price will be higher, even if actual performance isn't. Hell, M.2 SATA drives have historically been more expensive than their 2.5" counterparts for the sole reason that M.2 is "new and fancy," so they figure they can charge 5% more.
#66
Valantar
newtekie1I wouldn't say any motherboard manufacturer is fond of sharing lanes. They do it because they have to. Heck, the diagrams for X570 show that they are already doing that. The second M.2 slot in the first diagram is shared with the third PCI-E x16 (electrically x4 :rolleyes:) slot. So plugging in an additional M.2 drive disables the PCI-E slot.
...so there's nothing stopping an implementation like the one I outlined then. Arguably, an extra m.2 slot will be more useful than a PCIe slot for most users.
newtekie1My entire point is this shouldn't have to be a necessity. Give me enough lanes so that when I plug in an M.2 drive, my RAID card doesn't stop working.
Are you then prepared to pay >$250 for a mid-range motherboard with a 25W-ish chipset TDP? If so, you could probably get what you want. Or, you know, go HEDT. What you're asking for is a lot of the reason why HEDT motherboards are expensive - more PCB layers to accommodate more PCIe and memory channels. Mainstream platforms are for mainstream users, the vast majority of whom have no more than 1 GPU (and likely a GTX 1060 at best), 1 SSD - which might very well be SATA - and maybe an HDD. The 16 lanes off the chipset are plenty for even "mainstream enthusiasts", giving room for more m.2 SSDs, NICs and so on. And as always, you'll get x8/x8 SLI/CF.

Also, if your PC contains enough SSDs to require that last NVMe slot, and enough HDDs to require a RAID card, you should consider spinning your storage array out into a NAS or storage server. Then you won't have to waste lanes on a big GPU in that, making room for more controllers, SSDs and whatnot, while making your main PC less power hungry. Not keeping all your eggs in one basket is smart, particularly when it comes to storage. And again, if you can afford a RAID card and a bunch of NVMe SSDs, you can afford to set up a NAS.
newtekie1I wouldn't say they are popular. They are just a thing that exists. And they exist because the drives themselves can't really use more than an x2 connection anyway. If you are saying those are very popular and hence what everyone is buying, then there really is no need for PCI-E 4.0 drives, as I said.
They are popular because they are cheap. They are cheap because the controllers are simpler than x4 designs - both in internal channels and in the external PCIe interface - and having a narrower PCIe interface is a significant cost saving, which won't go away when moving to 4.0 even if they double up on internal channels to increase performance. In other words, unless PCIe 4.0 controllers are extremely complex to design and manufacture, a PCIe 4.0 x2 SSD controller will be cheaper than a similarly performing 3.0 x4 controller.
newtekie1You don't have a good grasp on how the tech world works, do you? The latest bleeding-edge technology, especially when it is the "fastest on the market," is never cheap. If they can throw a marketing gimmick in the specs that's new and faster, the price will be higher, even if actual performance isn't. Hell, M.2 SATA drives have historically been more expensive than their 2.5" counterparts for the sole reason that M.2 is "new and fancy," so they figure they can charge 5% more.
Phison and similar controller makers don't have the brand recognition or history of high-end performance to sell drives at proper "premium" NVMe prices - pretty much only Samsung does (outside of the enterprise/server space, that is, where prices are bonkers as always). Will they charge a premium for a 4.0 controller over a similar 3.0 one? Of course. But it won't be that much, as it wouldn't sell. Besides, even for the 970 Pro the flash is the main cost driver, not the controller. There's no doubt 4.0 drives will demand a premium, but as I said, I would be very surprised if they came close to the 970 Pro (which, for reference, is $100 more for 1TB compared to the Evo).
#67
newtekie1
Semi-Retired Folder
Valantar...so there's nothing stopping an implementation like the one I outlined then. Arguably, an extra m.2 slot will be more useful than a PCIe slot for most users.
Of course there is nothing stopping it. But it drives up cost to add PCI-E switches and it still isn't ideal. Just outright having more PCI-E lanes available from the beginning is the better solution.
ValantarAre you then prepared to pay >$250 for a mid-range motherboard with a 25W-ish chipset TDP? If so, you could probably get what you want. Or, you know, go HEDT. What you're asking for is a lot of the reason why HEDT motherboards are expensive - more PCB layers to accommodate more PCIe and memory channels. Mainstream platforms are for mainstream users, the vast majority of whom have no more than 1 GPU (and likely a GTX 1060 at best), 1 SSD - which might very well be SATA - and maybe an HDD. The 16 lanes off the chipset are plenty for even "mainstream enthusiasts", giving room for more m.2 SSDs, NICs and so on. And as always, you'll get x8/x8 SLI/CF.
The number of GPUs has nothing to do with the discussion. The GPU gets its lanes from the CPU, not the chipset. These downstream lanes off the chipset are what I'm talking about, and there are already boards that are running out of them.

As for a 25W TDP, no, that would also be unreasonable, and if it were that high then that would also be AMD's fault. The Z390 gives 24 downstream lanes and has a TDP of 6W, and it's also providing more I/O than the X570 would be. The fact is, thanks to AMD's better SoC-style platform and the CPU doing a lot of the I/O that Intel still has to rely on the southbridge to handle, the X570 has a major advantage when it comes to TDP thanks to needing to do less work. And I'd also guess the high 15W TDP estimates of the X570 come down to the fact that they are using PCI-E 4.0.

So, again, at this point in time I would rather them put more PCI-E 3.0 lanes in and not bother with PCI-E 4.0 in the consumer chipset. The more lanes will allow better motherboard designs without the need for switching and port disabling. It would likely lower the TDP as well.

And mainstream users are likely not using X570 either. They are likely going for the B-series boards, so likely B550. They buy less expensive boards, with fewer extra features, that require fewer PCI-E lanes. But enthusiasts that buy X570 boards expect those boards to be loaded with extra features, and most of those extras run off PCI-E lanes.
ValantarPhison and similar controller makers don't have the brand recognition or history of high-end performance to sell drives at proper "premium" NVMe prices - pretty much only Samsung does (outside of the enterprise/server space, that is, where prices are bonkers as always). Will they charge a premium for a 4.0 controller over a similar 3.0 one? Of course. But it won't be that much, as it wouldn't sell. Besides, even for the 970 Pro the flash is the main cost driver, not the controller. There's no doubt 4.0 drives will demand a premium, but as I said, I would be very surprised if they came close to the 970 Pro (which, for reference, is $100 more for 1TB compared to the Evo).
Phison isn't going to be selling drives to the consumer; they never have, AFAIK. So it doesn't matter how well known they are to the consumer - they are very well known to the drive manufacturers. They sell the controllers to drive manufacturers, and the drive manufacturers sell the drives to consumers. Phison will charge more for their controller, and the drive manufacturers will charge more for the end drives. They will charge more because the controller costs more, the NAND needed to actually hit the higher rated speeds costs more, and they have the marketing gimmick of PCI-E 4.0.
#68
Valantar
newtekie1Of course there is nothing stopping it. But it drives up cost to add PCI-E switches and it still isn't ideal. Just outright having more PCI-E lanes available from the beginning is the better solution.
Implementing switchable PCIe through the chipset is free, as the functionality is built in. The only thing driving up costs would be adding the required lanes and ports, which you're asking for more of, not less.
newtekie1The number of GPUs has nothing to do with the discussion. The GPU gets its lanes from the CPU, not the chipset. These downstream lanes off the chipset are what I'm talking about, and there are already boards that are running out of them.
But PCIe lanes are PCIe lanes. If you need more than the 16 off the chipset, use the second x16 slot from the CPU. Your GPU will lose maybe 1% of performance at worst, and you'll get 8 more PCIe lanes to play around with. And again, if that 1% of performance is so important to you, buy an HEDT platform.
newtekie1As for a 25W TDP, no, that would also be unreasonable, and if it were that high then that would also be AMD's fault. The Z390 gives 24 downstream lanes and has a TDP of 6W, and it's also providing more I/O than the X570 would be. The fact is, thanks to AMD's better SoC-style platform and the CPU doing a lot of the I/O that Intel still has to rely on the southbridge to handle, the X570 has a major advantage when it comes to TDP thanks to needing to do less work. And I'd also guess the high 15W TDP estimates of the X570 come down to the fact that they are using PCI-E 4.0.
Yes, the TDP is obviously due to PCIe 4.0 - higher frequencies means more power. That's a given. And 15W is perfectly fine (especially as it's only likely to pull that much power under heavy loads, which will be infrequent), but 25W would be problematic as you won't be able to cool that well passively without interfering with long AICs.
newtekie1So, again, at this point in time I would rather them put more PCI-E 3.0 lanes in and not bother with PCI-E 4.0 in the consumer chipset. The more lanes will allow better motherboard designs without the need for switching and port disabling. It would likely lower the TDP as well.
Well, tough luck I guess. I'm more interested in a more future-proof platform, and I'm reasonably sure that I'll be more than happy with 16+16 PCIe 4.0 lanes. I'm more interested in the push for adoption of a newer, faster standard (which will inevitably lead to cheaper storage at 3.0 x4 speeds once the "new standard" premium wears off and 2-channel 4.0 controllers proliferate) than I am in stuffing dozens of devices into my PC.

And of course, yes, more 3.0 lanes would allow for more ports/slots/devices without the need for switching, and likely lower the TDP of the chipset. But it would also drive up motherboard costs as implementing all of those PCIe lanes will require more complex PCBs. The solution, as with cheaper Z3xx boards, will likely be that a lot of those lanes are left unused.
newtekie1And mainstream users are likely not using X570 either. They are likely going for the B-series boards, so likely B550. They buy less expensive boards, with fewer extra features, that require fewer PCI-E lanes. But enthusiasts that buy X570 boards expect those boards to be loaded with extra features, and most of those extras run off PCI-E lanes.
That's not quite true. Of course, it's possible that X570 will demand more of a premium than X470 or X370, and yes, there are a lot of people using Bx50 boards, but the vast majority of people on X3/470 are still very solidly in the "mainstream" category, and have relatively few PCIe devices.
newtekie1Phison isn't going to be selling drives to the consumer; they never have, AFAIK. So it doesn't matter how well known they are to the consumer - they are very well known to the drive manufacturers. They sell the controllers to drive manufacturers, and the drive manufacturers sell the drives to consumers. Phison will charge more for their controller, and the drive manufacturers will charge more for the end drives. They will charge more because the controller costs more, the NAND needed to actually hit the higher rated speeds costs more, and they have the marketing gimmick of PCI-E 4.0.
No, they won't but they will be selling them to OEMs. Which OEMs? Not Samsung - which has the premium NVMe market cornered - and not WD, which is the current NVMe price/perf king. So they're left with brands with less stellar reputations, which means they'll be less able to sell products at ultra-premium prices, no matter the performance. Sure, some will try with exorbitant MSRPs, but prices inevitably drop once products hit the market. It's obvious that some will use PCIe 4.0 as a sales gimmick (with likely only QD>32 sequential reads exceeding PCIe 3.0 x4 speeds, if that), but in a couple of years the NVMe market is likely to have begun a wholesale transition to 4.0 with no real added cost. If AMD didn't move to 4.0 now, that move would happen an equivalent time after whenever 4.0 became available - in other words, we'd have to wait for a long time to get faster storage. The job of an interface is to provide plentiful performance for connected devices. PCIe 3.0 is reaching a point where it doesn't quite do that any more, so the move to 4.0 is sensible and timely. Again, it's obvious that there will be few devices available in the beginning, but every platform needs to start somewhere, and postponing the platform also means postponing everything else, which is a really bad plan.
#69
nemesis.ie
The other advantage is that the 2nd full-speed PCIe slot is just that: full speed. The chipset lanes are all sharing a single x4 uplink (or the similar DMI on Intel), which could be a big bottleneck - it will not cope with two M.2 PCIe 4.0 drives at full speed, so those are actually better off in a slot.

Given the speed of USB 3.1 versus most peripherals, a hub is probably better (again for those that need it) than putting a pile of USB on every motherboard.

Someone could in theory also build a monster I/O card giving out 2x the amount the chipset does (x8 versus x4) for the few people that want more I/O without going to HEDT.

As you say, the loss from x16 to x8 for a GPU is currently very low, even on 3.0.
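A back-of-the-envelope on that bottleneck, using standard PCIe rates (a sketch; it assumes both drives transferring flat out simultaneously):

```python
# Two gen4 x4 M.2 SSDs behind the chipset share one gen4 x4 uplink to the CPU.
ENCODING = 128 / 130           # 128b/130b line-code efficiency
gen4_x4 = 16.0 * 4 * ENCODING  # ~63 Gbps effective per gen4 x4 link

uplink = gen4_x4               # chipset-to-CPU bus
demand = 2 * gen4_x4           # both SSDs at full speed
print(f"uplink {uplink:.0f} Gbps vs. demand {demand:.0f} Gbps "
      f"({demand / uplink:.0f}x oversubscribed)")
```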
#70
newtekie1
Semi-Retired Folder
ValantarImplementing switchable PCIe through the chipset is free, as the functionality is built in. The only thing driving up costs would be adding the required lanes and ports, which you're asking for more of, not less.
No, it's not. It requires extra components on the board and extra programming in the BIOS. Neither of which is free.
ValantarBut PCIe lanes are PCIe lanes. If you need more than the 16 off the chipset, use the second x16 slot from the CPU. Your GPU will lose maybe 1% of performance at worst, and you'll get 8 more PCIe lanes to play around with. And again, if that 1% of performance is so important to you, buy an HEDT platform.
Except, AFAIK, that isn't allowed. The other 8 lanes from the CPU have to be wired to a PCI-E slot. AMD doesn't allow you to use them as general purpose lanes. And, as much as you and I know that dropping the primary GPU down to x8 doesn't really affect performance, no one wants a motherboard that just always runs the single GPU at x8. Just look at how many threads we get here of people freaking out because their GPU isn't running at x16.
ValantarYes, the TDP is obviously due to PCIe 4.0 - higher frequencies means more power. That's a given. And 15W is perfectly fine (especially as it's only likely to pull that much power under heavy loads, which will be infrequent), but 25W would be problematic as you won't be able to cool that well passively without interfering with long AICs.
ValantarWell, tough luck I guess. I'm more interested in a more future-proof platform, and I'm reasonably sure that I'll be more than happy with 16+16 PCIe 4.0 lanes. I'm more interested in the push for adoption of a newer, faster standard (which will inevitably lead to cheaper storage at 3.0 x4 speeds once the "new standard" premium wears off and 2-channel 4.0 controllers proliferate) than I am in stuffing dozens of devices into my PC.

And of course, yes, more 3.0 lanes would allow for more ports/slots/devices without the need for switching, and likely lower the TDP of the chipset. But it would also drive up motherboard costs as implementing all of those PCIe lanes will require more complex PCBs. The solution, as with cheaper Z3xx boards, will likely be that a lot of those lanes are left unused.
I'm not complaining about 15w, I'm countering the point that adding more lanes would cause the chipset to use 25w or whatever. That would not be the case with more PCI-E 3.0 lanes, and that is my point. I'd rather have a 15w chipset with 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes.

The lanes from the CPU are already PCI-E 4.0. That covers your GPUs and a high end PCI-E 4.0 M.2 SSD when you want to upgrade to one in the future. Make the chipset put out PCI-E 3.0 lanes and give more flexibility with more lanes. You're still getting your futureproofing, you're still getting your adoption of a new standard, and you also get more flexibility to add more components to the motherboard without forcing the consumer to make a decision between what they want to use.
ValantarNo, they won't but they will be selling them to OEMs. Which OEMs? Not Samsung - which has the premium NVMe market cornered - and not WD, which is the current NVMe price/perf king. So they're left with brands with less stellar reputations, which means they'll be less able to sell products at ultra-premium prices, no matter the performance. Sure, some will try with exorbitant MSRPs, but prices inevitably drop once products hit the market. It's obvious that some will use PCIe 4.0 as a sales gimmick (with likely only QD>32 sequential reads exceeding PCIe 3.0 x4 speeds, if that), but in a couple of years the NVMe market is likely to have begun a wholesale transition to 4.0 with no real added cost. If AMD didn't move to 4.0 now, that move would happen an equivalent time after whenever 4.0 became available - in other words, we'd have to wait for a long time to get faster storage. The job of an interface is to provide plentiful performance for connected devices. PCIe 3.0 is reaching a point where it doesn't quite do that any more, so the move to 4.0 is sensible and timely. Again, it's obvious that there will be few devices available in the beginning, but every platform needs to start somewhere, and postponing the platform also means postponing everything else, which is a really bad plan.
So, what you're saying is that the only PCI-E 4.0 NVMe SSD controller we've seen so far won't be used by either of the two biggest well-known SSD manufacturers (it won't likely be used by Micron either, so that's actually the 3 biggest SSD manufacturers). Yeah, those PCI-E 4.0 controllers are ready to go mainstream, I tell ya!

And, like I said, it isn't like the platform wouldn't have a PCI-E 4.0 M.2 slot for the future anyway. Remember, I'm not arguing to completely get rid of PCI-E 4.0; the CPU would still be putting out PCI-E 4.0 lanes. So there would still be a slot available when the time comes that you actually want to buy a PCI-E 4.0 M.2 drive.
#71
Valantar
newtekie1No, it's not. It requires extra components on the board and extra programming in the BIOS. Neither of which is free.
At worst it requires some very minor components to switch the lanes from one path to another. The few cents those cost are nothing compared to the price of a couple of extra PCB layers.
newtekie1Except, AFAIK, that isn't allowed. The other 8 lanes from the CPU have to be wired to a PCI-E slot. AMD doesn't allow you to use them as general purpose lanes. And, as much as you and I know that dropping the primary GPU down to x8 doesn't really affect performance, no one wants a motherboard that just always runs the single GPU at x8. Just look at how many threads we get here of people freaking out because their GPU isn't running at x16.
You should tell that to all the SFF fans using bifurcated risers from the x16 slot on their ITX boards to run SSDs alongside their GPUs, or other PCIe AICs like 10GbE NICs. Heck, a few motherboards even support "trifurcation" into x8+x4+x4 with a suitable riser. They're not advertised as general purpose lanes, but if you connect something to them, they work. PCIe is PCIe. The general rule for any x16/x8+x8 motherboard is that your GPU will get half the bandwidth if you're not careful where you stick your WiFi card or whatever else you want to install. It's always been that way.
newtekie1I'm not complaining about 15w, I'm countering the point that adding more lanes would cause the chipset to use 25w or whatever. That would not be the case with more PCI-E 3.0 lanes, and that is my point. I'd rather have a 15w chipset with 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes.
Again: this puts you in a tiny minority among MSDT users. Most will never, ever come close to using all their PCIe lanes. Again: you seem to be a personification of the target group for HEDT systems - someone wanting boatloads of PCIe. And what you want would drive up motherboard prices for everyone else for no good reason. PCIe traces are complicated and expensive to implement.
newtekie1The lanes from the CPU are already PCI-E 4.0. That covers your GPUs and a high end PCI-E 4.0 M.2 SSD when you want to upgrade to one in the future. Make the chipset put out PCI-E 3.0 lanes and give more flexibility with more lanes. You're still getting your futureproofing, you're still getting your adoption of a new standard, and you also get more flexibility to add more components to the motherboard without forcing the consumer to make a decision between what they want to use.
There's no future proofing if all the output lanes are 3.0. Want to add a 4.0 x1 10GbE NIC in a couple of years? Yeah, sorry, it'll run at half speed. Want a TB4 controller when those come out? Or a USB 3.2G2x2 (or whatever the ¤%!&@! it's called) controller that doesn't eat a full four lanes? Sorry, no can do. I agree that a chipset with a 3.0 switch but a 4.0 uplink is far better than 3.0 all around, but given that PCs bought today are likely to be in service for the better part of the next decade, not wanting to future-proof the I/O with the fastest possible standards is rather silly.
newtekie1So, what you're saying is that the only PCI-E 4.0 NVMe SSD controller we've seen so far won't be used by either of the two biggest well-known SSD manufacturers (it won't likely be used by Micron either, so that's actually the 3 biggest SSD manufacturers). Yeah, those PCI-E 4.0 controllers are ready to go mainstream, I tell ya!
Okay, you seem not to be actually reading. Have I said that there's a crapton of 4.0 SSDs around the corner? No. I've said - quite explicitly - that a key thing is to get 4.0-capable platforms out the gate so that component manufacturers get off their asses and start making products. And, as we've seen with SSDs, they will. Why do you think Samsung has been holding off on launching a 980 series? There's no way that's coming out without PCIe 4.0 support. And as always with new I/O standards, it'll take time for it to become common, so we have to get it going now if we want this to be available in 2-3 years rather than 4-5. If there weren't platforms coming for it, there wouldn't be PCIe 4.0 devices in production now either. This is why it's great for all PC enthusiasts that AMD is making this push at this exact time - the timing is very, very good.
#72
RichF
Penev9115 watts is nothing if there's an adequate heatsink. X58 and 990FX were both above 20W and didn't have active cooling. This is just lazy design on the motherboard manufacturer's behalf.
You're forgetting that they have to sacrifice some things, in terms of quality, to pay for the plastic shrouds, brand logos, and rainbow LEDs.

People talk about how primitive the fans are, but it's not like tower VRM coolers are a new thing. Nor are boards with copper, highly-finned coolers. But, clearly, we are advancing as an industry, because rainbow LEDs, plastic shrouds, and false phase-count claims are where it's at.

I wonder if even one of the board sellers is going to bring feature parity between AMD and Intel. Intel boards for quad CPUs were given coolers that could be hooked up to a loop. AMD buyers, despite having Piledriver to power, were given the innovation of tiny fans. Yes, folks, the tiny fan innovation for AMD was most recently seen in the near-EOL AM3+ boards. Meanwhile, only Intel buyers were considered serious enough to have the option of making use of their loops without having to pay through the nose for an EK-style solution. (I'm trying to remember the year the first hybrid VRM cooler was sold to Intel buyers. 2011? There was some controversy over it being anodized aluminum, but ASUS claimed it was safe. Nevertheless, it switched to copper shortly after. I believe Gigabyte sold hybrid-cooled boards as well, for Intel systems. The inclusion of hybrid cooling was not a one-off; it was multigenerational and expanded from ASUS to Gigabyte.)

MSI's person said no one wanted this but ASUS, at least, thought the return of the tiny fan was innovative.
#73
jeremyshaw
IceShroom"Zepline" die actually has 32 PCI-e lane.
On AM4 it only 24 lanes are activate for maybe compitability for reason with APU.
But on for Embedded server part all 32 Lanes are active.
www.amd.com/en/products/specifications/embedded/8161
Is that even the same die? It also has two 10GbE MACs onboard, which would be a very useful thing in a laptop (maybe not 10GbE, but having to sacrifice one PCIe port for a Realtek/Killer/Intel NIC isn't helping, especially if the Ethernet controller already exists onboard).