
AMD X570 Unofficial Platform Diagram Revealed, Chipset Puts out PCIe Gen 4

Joined
Nov 24, 2017
Messages
853 (0.33/day)
Location
Asia
Processor Intel Core i5 4590
Motherboard Gigabyte Z97x Gaming 3
Cooling Intel Stock Cooler
Memory 8GiB(2x4GiB) DDR3-1600 [800MHz]
Video Card(s) XFX RX 560D 4GiB
Storage Transcend SSD370S 128GB; Toshiba DT01ACA100 1TB HDD
Display(s) Samsung S20D300 20" 768p TN
Case Cooler Master MasterBox E501L
Audio Device(s) Realtek ALC1150
Power Supply Corsair VS450
Mouse A4Tech N-70FX
Software Windows 10 Pro
Benchmark Scores BaseMark GPU : 250 Point in HD 4600
Strangely, I have heard the 28 lanes claim before. Maybe the socket itself has the pinout for them, but it's never used. Or the other 4 lanes are for different configurations (such as chipset-less configurations, like the A300/X300 "chipsets"), and considered to be exclusive of the "main" 24 lanes.

Either way, socket information for AM4, and especially FP5, is somewhat hard to find compared to their Intel counterparts.
"Zepline" die actually has 32 PCI-e lane.
On AM4 it only 24 lanes are activate for maybe compitability for reason with APU.
But on for Embedded server part all 32 Lanes are active.
 
Joined
Jan 17, 2006
Messages
932 (0.14/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taichi Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
@Valantar Exactly, that's the point. :) Just put your 4x M.2 card in and an I/O card and you are rocking, and still have 2 full x16 slots to pop the GPUs in, as well as all the chipset stuff.

@IceShroom Isn't it called "Zeppelin" like the air-ship?

I presume all that goes away with the CPU chiplet and moves to the I/O die. Exactly what they have in there has yet to be fully revealed.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
A little off topic, but I wonder if the next generation of consoles will utilize Gen 4 PCIe? I saw a Sony demo the other day and the loading times were fantastic. If such speeds are expected on a console, then on a high-end PC that should be a given. Wouldn't mind a full motherboard RGB block that covers the chipset as well. I need a lil bling in my life.
 
More than likely it will be a Gen 4 x4 NVMe drive.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A little off topic, but I wonder if the next generation of consoles will utilize Gen 4 PCIe? I saw a Sony demo the other day and the loading times were fantastic. If such speeds are expected on a console, then on a high-end PC that should be a given. Wouldn't mind a full motherboard RGB block that covers the chipset as well. I need a lil bling in my life.
More than likely it will be a Gen 4 x4 NVMe drive.
Yeah, sounds likely. The cost difference between an off-the-shelf PCIe 4.0 NVMe controller and a similar PCIe 3.0 one ought to be negligible (though OEMs are likely to charge a premium for them, at least at first), so if Sony is going NVMe with the PS5 there's little reason to expect it not to be PCIe 4.0. Then again, they'll need to change the I/O scheme drastically from the PS4, where all I/O is handled through the ARM SoC in the chipset. Unless they've had AMD design that as well? They do have an ARM licence, so who knows?
 

HTC

Joined
Apr 1, 2008
Messages
4,664 (0.77/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
Spotted the below pic @ Anandtech's forums:

[attached image: 1-1080.e6b8f6cd.jpg]

Original source (German).
 

HTC

... you haven't seen the updated news post on the front page then? It's the very one we're discussing in the comments of ;)

I had not ... facepalm ... oooops ...
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.10/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
...which is exactly what PCIe 4.0 allows for. How? By doubling bandwidth per lane. A PCIe 4.0 x2 SSD can match the performance of a PCIe 3.0 x4 SSD, meaning that you can run two fast SSDs off the same number of lanes as one previously. A single 4.0 lane is enough for a 10GbE NIC, where you previously needed two lanes. And so on and so forth. GPUs won't need more than x8 PCIe 4.0 for the foreseeable future (and in reality hardly more than x4 for most GPUs), so splitting off lanes for storage or other controllers is less of an issue. Sure, performance (or the advantage of splitting lanes) is lost if using PCIe 3.0 devices, but there is flexibility to be had - for example a motherboard might have two m.2 slots where they share the latter two lanes (switchable in BIOS, ideally) so that you can run either two ~3.5GB/s SSDs or one faster than that. Motherboard trace routing will also become less complex if the thinking shifts this way, leading to potentially cheaper motherboards or more features at lower price points.
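To put rough numbers on the quoted doubling argument, here is a quick back-of-the-envelope sketch; the per-lane transfer rates are from the PCIe spec, everything else is plain arithmetic:

```python
# Approximate one-direction PCIe bandwidth after 128b/130b encoding overhead.
GT_PER_S = {3: 8.0, 4: 16.0}  # transfer rate per lane in GT/s, by PCIe generation

def usable_gb_s(gen: int, lanes: int) -> float:
    """Usable bandwidth in GB/s for a given generation and lane count."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x4: {usable_gb_s(3, 4):.2f} GB/s")  # ~3.94 GB/s
print(f"PCIe 4.0 x2: {usable_gb_s(4, 2):.2f} GB/s")  # ~3.94 GB/s - matches 3.0 x4
print(f"PCIe 4.0 x1: {usable_gb_s(4, 1):.2f} GB/s")  # ~1.97 GB/s - enough for 10GbE (1.25 GB/s)
```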

That all works only in theory. The theory falls apart however when you consider the fact that there aren't any PCI-E 4.0 devices, and there likely won't be any affordable ones in the usable lifespan of this chipset. Which means all those PCI-E 4.0 lanes will be running at half bandwidth PCI-E 3.0. PCI-E 4.0 is, at this point, just a marketing gimmick. Yeah, it might be nice to double the interconnect between the chipset and the CPU, but for actual lanes coming off the chipset, I'd rather have 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 right now.

Based on? It's 16 lanes in total. Eight for M.2, one for Ethernet, one for Wi-Fi and six for expansion slots. You need more?
Technically external USB controllers shouldn't be needed, as all the USB 3 ports are 3.1 G2 and the chipset should support eight of them.

Yes, I'd like a 3rd PCI-E x16 slot wired electrically x8 instead of just x4. And even with just an x4 electrically wired x16 slot, you're down to 12 lanes. Two x4 M.2 slots and you're down to 4 lanes left. One for WiFi and you're down to 3 lanes left. Two Gigabit Ethernet ports and you're down to 1 lane left. Two x1 slots and... oh wait, you can't, because you're out of lanes. Want to add another SATA controller? Nope, out of lanes. Another USB controller? Nope, out of lanes.
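That countdown is easy to verify; here it is as a trivial sketch (the per-device lane costs are the post's assumptions, not any particular board's spec sheet):

```python
# Walk the hypothetical 16-lane chipset budget from the post above.
budget = 16  # downstream PCIe lanes off the chipset
devices = [
    ("x16 slot, electrically x4", 4),
    ("M.2 slot #1 (x4)",          4),
    ("M.2 slot #2 (x4)",          4),
    ("WiFi",                      1),
    ("Gigabit Ethernet #1",       1),
    ("Gigabit Ethernet #2",       1),
    ("x1 slot #1",                1),
    ("x1 slot #2",                1),
]
for name, lanes in devices:
    budget -= lanes
    print(f"{name:28s} -> {max(budget, 0):2d} lanes left"
          + ("  (OUT OF LANES)" if budget < 0 else ""))
```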
 
That all works only in theory. The theory falls apart however when you consider the fact that there aren't any PCI-E 4.0 devices, and there likely won't be any affordable ones in the usable lifespan of this chipset. Which means all those PCI-E 4.0 lanes will be running at half bandwidth PCI-E 3.0. PCI-E 4.0 is, at this point, just a marketing gimmick. Yeah, it might be nice to double the interconnect between the chipset and the CPU, but for actual lanes coming off the chipset, I'd rather have 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 right now.
The first consumer PCIe 4.0 NVMe controllers are already announced and will be arriving in SSDs this fall. They are targeted at the mainstream/upper mainstream NVMe market, i.e. WD Black or Samsung Evo-ish prices. Perfectly fine, in other words. And no doubt other AICs will start adopting the standard over the next couple of years. It'll take time, sure, but it will happen. And the "usable lifespan of this chipset" is 5+ years, and I can guarantee you there'll be plenty of PCIe 4.0 devices by that time.
 

newtekie1

Semi-Retired Folder
The first consumer PCIe 4.0 NVMe controllers are already announced and will be arriving in SSDs this fall. They are targeted at the mainstream/upper mainstream NVMe market, i.e. WD Black or Samsung Evo-ish prices. Perfectly fine, in other words. And no doubt other AICs will start adopting the standard over the next couple of years. It'll take time, sure, but it will happen. And the "usable lifespan of this chipset" is 5+ years, and I can guarantee you there'll be plenty of PCIe 4.0 devices by that time.

I'd like to know where you are getting your pricing for these SSD. Nothing I've seen mention what planned pricing is. Furthermore, PCI-E 4.0 on an M.2 SSD isn't going to make a noticeable difference. We are hitting diminishing returns as it is. There is almost no noticeable difference between an x2 SSD and an x4 as it is, do you really think doubling the bandwidth again is going to make a sudden difference? No, it isn't. It will be nothing more than a marketing gimmick, with "rated" sequential reads that are huge, but actual performance that isn't really better than other PCI-E 3.0 drives.

Sure, you can make the argument that they can use PCI-E 4.0 x2 drives and only use 2 of the 16 lanes. But that also doesn't work, because motherboard manufacturers are not going to put x2 M.2 slots on their boards unless they absolutely have to (because they are out of PCI-E lanes). Most consumers are going to be putting PCI-E 3.0 drives in those slots, so they will be limited to PCI-E 3.0 x2, and that looks bad on paper.

Sure, people will have these boards in use for 5+ years, but that is not what I mean by usable lifespan of the chipset. The usable lifespan of the chipset is how long manufacturers will be designing motherboards around this chipset. That lifespan is likely a year, maybe 2. And the fact is, in that time it is unlikely they will actually be designing boards with PCI-E 4.0 as the priority. They will still be designing boards assuming people will be using PCI-E 3.0 devices, because it doesn't make sense to design boards for use 3 years after it is bought.
 
Joined
Nov 21, 2010
Messages
2,353 (0.46/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
So we should stop at PCIe 3.0 and call it a day? Motherboards have always been ahead of graphics cards, be it when VL-Bus, PCI, AGP or PCI Express came out.
It's kind of how it has to work. Obviously with PCIe, we haven't had to change the physical interface for a few generations, so it has been a lot easier than in the past to transition to a new, faster version. Pointless is a very strong word in this case and you also seem to have missed the fact that there will be PCIe 4.0 NVMe SSDs coming out soon, which will reap benefits from the faster interface. How useful the extra speed will be to most people is a different matter. Also, as I mentioned elsewhere, this will allow for a single PCIe lane on 10Gbps Ethernet cards which might make them more affordable and more common.

There's no reasoning with those who think like this. GPUs and add-on cards could have arrived with the spec before motherboards, and they'd still have the gall to malign them because motherboards didn't support the spec yet. The reason motherboards tend to get new specs first is that their upgrade cycle is typically much longer than that of any other component in the system.
 
Joined
Feb 22, 2009
Messages
409 (0.07/day)
Location
Grand Prairie Texas
System Name Little Girl
Processor Intel Q9650 @ 3.6GHz
Motherboard Gigabyte x48 DQ6
Cooling liquid cooling
Memory 4gb (2x2) OCZ DDR2 PC2-9200
Video Card(s) Gigabyte HD6950 unlock to Asus 6970 specs
Storage Crucial CT128M225 128gb SSD
Display(s) Acer 27" LCD @ 2048x1152
Case DIY (spit & glue, ducktape, cardboard)
Audio Device(s) On-board HD Audio
Power Supply ABS Tagan 850w
Software Win7 64bit
AMD couldn't even compete for the crown in Gen3. Gen4 is not gonna save them.
 
@newtekie1 It looks like these new NVMe drives will have 900,000+ IOPS; that should definitely be noticeable.

It's the lower QD/small R/W tasks that are the reason we don't see much real-world difference in current products.

3x the IOPS will move things along nicely even if the b/w has only gone up 30%.
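For a sense of scale, at the usual 4 KiB benchmark transfer size that IOPS figure already lands near the PCIe 3.0 x4 ceiling (a rough sketch; the block size is a benchmarking convention, not a quoted spec):

```python
# How random IOPS translate into bandwidth at a given block size.
iops = 900_000      # figure from the announced Gen4 controllers discussed above
block = 4 * 1024    # 4 KiB, the usual random-I/O benchmark size (assumption)
print(f"{iops * block / 1e9:.2f} GB/s")  # ~3.69 GB/s - near the PCIe 3.0 x4 limit
```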

@my_name_is_earl What are you on about? AMD's platforms have more lanes than their competitor.

Maybe wait until Tuesday before you respond?
 
I'd like to know where you are getting your pricing for these SSD. Nothing I've seen mention what planned pricing is. Furthermore, PCI-E 4.0 on an M.2 SSD isn't going to make a noticeable difference. We are hitting diminishing returns as it is. There is almost no noticeable difference between an x2 SSD and an x4 as it is, do you really think doubling the bandwidth again is going to make a sudden difference? No, it isn't. It will be nothing more than a marketing gimmick, with "rated" sequential reads that are huge, but actual performance that isn't really better than other PCI-E 3.0 drives.

Sure, you can make the argument that they can use PCI-E 4.0 x2 drives and only use 2 of the 16 lanes. But that also doesn't work, because motherboard manufacturers are not going to put x2 M.2 slots on their boards unless they absolutely have to (because they are out of PCI-E lanes). Most consumers are going to be putting PCI-E 3.0 drives in those slots, so they will be limited to PCI-E 3.0 x2, and that looks bad on paper.

Sure, people will have these boards in use for 5+ years, but that is not what I mean by usable lifespan of the chipset. The usable lifespan of the chipset is how long manufacturers will be designing motherboards around this chipset. That lifespan is likely a year, maybe 2. And the fact is, in that time it is unlikely they will actually be designing boards with PCI-E 4.0 as the priority. They will still be designing boards assuming people will be using PCI-E 3.0 devices, because it doesn't make sense to design boards for use 3 years after it is bought.
Motherboard manufacturers are very fond of sharing lanes across slots/ports/devices, and it would be entirely possible for them to stuff a board to the gills with m.2 slots while making some of them switchable 2-lane setups. 2-lane PCIe 3.0 drives are already very popular, and while I agree that limiting a motherboard slot to PCIe 3.0 x2 looks bad on paper, more slots are always better. I for one would love it if a board came with five m.2 slots - one 4.0 x4 from the CPU, two 4.0 x4 from the chipset, with both of these switchable separately to x2, enabling the last two slots at x2. That way you'd have all the SSD space you could possibly need, while maintaining performance. You could have three full-speed >8GBps (theoretical) drives, or three 3.0 x4 drives, or a mix of various types and interface widths. Have an old 3.0 x4 drive, an old 3.0 x2 drive, and a new 4.0 x2 drive? You'd still be able to fit in two more x2 drives or a single x4 drive. Yes, it would require users to RTFM, but that's already quite common with all the "m.2 slot 1 disables SATA_1 and SATA_0" etc. going on today.
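A quick sketch of how that hypothetical five-slot layout would allocate its 12 lanes (the slot arrangement and switching rules are the hypothetical described above, not any real board):

```python
# Enumerate the configurations of the hypothetical 5-slot M.2 layout:
# one x4 slot from the CPU, plus two chipset x4 slots that can each be
# switched down to x2 to enable an extra x2 slot.
from itertools import product

for split_a, split_b in product([False, True], repeat=2):
    slots = {"M2_cpu": 4}
    slots["M2_a"] = 2 if split_a else 4
    if split_a:
        slots["M2_a2"] = 2  # extra slot enabled by splitting chipset group A
    slots["M2_b"] = 2 if split_b else 4
    if split_b:
        slots["M2_b2"] = 2  # extra slot enabled by splitting chipset group B
    assert sum(slots.values()) == 12  # the total lane count never changes
    print(slots)
```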

As for pricing, there's nothing announced, but there is zero reason to expect them to drastically increase in price over today's most common drives. I'd be very surprised if they were as expensive as the 970 Pro, and prices are bound to drop quickly as more options arrive. A relatively minor interface upgrade for a commodity like an SSD is hardly grounds for a doubling in price.
 

newtekie1

Semi-Retired Folder

I'm not getting your point. Are you trying to prove that PCI-E 4.0 SSDs are coming? I never disputed that.

Motherboard manufacturers are very fond of sharing lanes across slots/ports/devices, and it would be entirely possible for them to stuff a board to the gills with m.2 slots while making some of them switchable 2-lane setups.

I wouldn't say any motherboard manufacturer is fond of sharing lanes. They do it because they have to. Heck, the diagrams for X570 show that they are already doing that. The second M.2 slot in the first diagram is shared with the third PCI-E x16 (electrically x4 :rolleyes:) slot. So plugging in an additional M.2 drive disables the PCI-E slot.

My entire point is this shouldn't have to be a necessity. Give me enough lanes so that when I plug in an M.2 drive, my RAID card doesn't stop working.

2-lane PCIe 3.0 drives are already very popular

I wouldn't say they are popular. They are just a thing that exists. And they exist because the drives themselves can't really use more than an x2 connection anyway. If you are saying those are very popular and hence what everyone is buying, then there really is no need for PCI-E 4.0 drives, as I said.

As for pricing, there's nothing announced, but there is zero reason to expect them to drastically increase in price over today's most common drives.

You don't have a good grasp on how the tech world works, do you? The latest bleeding-edge technology, especially when it is the "fastest on the market", is never cheap. If they can throw a marketing gimmick in the specs that's new and faster, the price will be higher, even if actual performance isn't. Hell, M.2 SATA drives have historically been more expensive than their 2.5" counterparts for the sole reason that M.2 is "new and fancy", so they figure they can charge 5% more.
 
I wouldn't say any motherboard manufacturer is fond of sharing lanes. They do it because they have to. Heck, the diagrams for X570 show that they are already doing that. The second M.2 slot in the first diagram is shared with the third PCI-E x16 (electrically x4 :rolleyes:) slot. So plugging in an additional M.2 drive disables the PCI-E slot.
...so there's nothing stopping an implementation like the one I outlined then. Arguably, an extra m.2 slot will be more useful than a PCIe slot for most users.

My entire point is this shouldn't have to be a necessity. Give me enough lanes so that when I plug in an M.2 drive, my RAID card doesn't stop working.
Are you then prepared to pay >$250 for a mid-range motherboard with a 25W-ish chipset TDP? If so, you could probably get what you want. Or, you know, go HEDT. What you're asking for is a lot of the reason why HEDT motherboards are expensive - more PCB layers to accommodate more PCIe and memory channels. Mainstream platforms are for mainstream users, the vast majority of whom have no more than 1 GPU (and likely a GTX 1060 at best), 1 SSD - which might very well be SATA - and maybe an HDD. The 16 lanes off the chipset is plenty for even "mainstream enthusiasts", giving room for more m.2 SSDs, NICs and so on. And as always, you'll get x8/x8 SLI/CF.

Also, if your PC contains enough SSDs to require that last NVMe slot, and enough HDDs to require a RAID card, you should consider spinning your storage array out into a NAS or storage server. Then you won't have to waste lanes on a big GPU in that, making room for more controllers, SSDs and whatnot, while making your main PC less power hungry. Not keeping all your eggs in one basket is smart, particularly when it comes to storage. And again, if you can afford a RAID card and a bunch of NVMe SSDs, you can afford to set up a NAS.

I wouldn't say they are popular. They are just a thing that exists. And they exist because the drives themselves can't really use more than an x2 connection anyway. If you are saying those are very popular and hence what everyone is buying, then there really is no need for PCI-E 4.0 drives, as I said.
They are popular because they are cheap. They are cheap because the controllers are simpler than x4 drives - mostly in both internal channels and the external PCIe interface - and having a narrower PCIe interface is a significant cost saving, which won't go away when moving to 4.0 even if they double up on internal lanes to increase performance. In other words, unless PCIe 4.0 controllers are extremely complex to design and manufacture, a PCIe 4.0 x2 SSD controller will be cheaper than a similarly performing 3.0 x4 controller.

You don't have a good grasp on how the tech world works, do you? The latest bleeding-edge technology, especially when it is the "fastest on the market", is never cheap. If they can throw a marketing gimmick in the specs that's new and faster, the price will be higher, even if actual performance isn't. Hell, M.2 SATA drives have historically been more expensive than their 2.5" counterparts for the sole reason that M.2 is "new and fancy", so they figure they can charge 5% more.
Phison and similar controller makers don't have the brand recognition or history of high-end performance to sell drives at proper "premium" NVMe prices - pretty much only Samsung does (outside of the enterprise/server space, that is, where prices are bonkers as always). Will they charge a premium for a 4.0 controller over a similar 3.0 one? Of course. But it won't be that much, as it wouldn't sell. Besides, even for the 970 Pro the flash is the main cost driver, not the controller. There's no doubt 4.0 drives will demand a premium, but as I said, I would be very surprised if they came close to the 970 Pro (which, for reference, is $100 more for 1TB compared to the Evo).
 

newtekie1

Semi-Retired Folder
...so there's nothing stopping an implementation like the one I outlined then. Arguably, an extra m.2 slot will be more useful than a PCIe slot for most users.

Of course there is nothing stopping it. But it drives up cost to add PCI-E switches and it still isn't ideal. Just outright having more PCI-E lanes available from the beginning is the better solution.

Are you then prepared to pay >$250 for a mid-range motherboard with a 25W-ish chipset TDP? If so, you could probably get what you want. Or, you know, go HEDT. What you're asking for is a lot of the reason why HEDT motherboards are expensive - more PCB layers to accommodate more PCIe and memory channels. Mainstream platforms are for mainstream users, the vast majority of whom have no more than 1 GPU (and likely a GTX 1060 at best), 1 SSD - which might very well be SATA - and maybe an HDD. The 16 lanes off the chipset is plenty for even "mainstream enthusiasts", giving room for more m.2 SSDs, NICs and so on. And as always, you'll get x8/x8 SLI/CF.

The number of GPUs has nothing to do with the discussion. The GPU gets its lanes from the CPU, not the chipset. These downstream lanes off the chipset are what I'm talking about, and there are already boards that are running out of them.

As for a 25W TDP, no, that is also unreasonable, and if it were that high then that would also be AMD's fault. The Z390 gives 24 downstream lanes and has a TDP of 6W, and it's also providing more I/O than the X570 would be. The fact is, thanks to AMD's better SoC-style platform and the CPU doing a lot of the I/O that Intel still has to rely on the southbridge to handle, the X570 has a major advantage when it comes to TDP, thanks to needing to do less work. And I'd also guess the high 15W TDP estimates of the X570 come down to the fact that they are using PCI-E 4.0.

So, again, at this point in time I would rather them put more PCI-E 3.0 lanes in and not bother with PCI-E 4.0 in the consumer chipset. The more lanes will allow better motherboard designs without the need for switching and port disabling. It would likely lower the TDP as well.
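In aggregate bandwidth terms, the two options being argued over are actually a wash, which is worth spelling out (quick arithmetic; the lane counts are the ones from this exchange):

```python
# Total downstream bandwidth of the options being compared.
def lane_gb_s(gen: int) -> float:
    """Usable GB/s per lane after 128b/130b encoding overhead."""
    return {3: 8.0, 4: 16.0}[gen] * (128 / 130) / 8

print(f"Z390 : 24 x Gen3 = {24 * lane_gb_s(3):.1f} GB/s")  # ~23.6 GB/s
print(f"X570 : 16 x Gen4 = {16 * lane_gb_s(4):.1f} GB/s")  # ~31.5 GB/s
print(f"Wish : 32 x Gen3 = {32 * lane_gb_s(3):.1f} GB/s")  # ~31.5 GB/s - same total
```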

And the mainstream users are likely not using X570 either. They are likely going for the B-series boards, so likely B550. They buy less expensive boards, with fewer extra features, that require fewer PCI-E lanes. But enthusiasts that buy X570 boards expect those boards to be loaded with extra features, and most of those extras run off PCI-E lanes.

Phison and similar controller makers don't have the brand recognition or history of high-end performance to sell drives at proper "premium" NVMe prices - pretty much only Samsung does (outside of the enterprise/server space, that is, where prices are bonkers as always). Will they charge a premium for a 4.0 controller over a similar 3.0 one? Of course. But it won't be that much, as it wouldn't sell. Besides, even for the 970 Pro the flash is the main cost driver, not the controller. There's no doubt 4.0 drives will demand a premium, but as I said, I would be very surprised if they came close to the 970 Pro (which, for reference, is $100 more for 1TB compared to the Evo).

Phison isn't going to be selling drives to the consumer; they never have, AFAIK. So it doesn't matter how well known they are to the consumer - they are very well known to the drive manufacturers. They sell the controllers to drive manufacturers, and the drive manufacturers sell the drives to consumers. Phison will charge more for their controller, and the drive manufacturers will charge more for the end drives. They will charge more because the controller costs more, the NAND needed to hit the actual higher rated speeds costs more, and they have the marketing gimmick of PCI-E 4.0.
 
Of course there is nothing stopping it. But it drives up cost to add PCI-E switches and it still isn't ideal. Just outright having more PCI-E lanes available from the beginning is the better solution.
Implementing switchable PCIe through the chipset is free, as the functionality is built in. The only thing driving up costs would be adding the required lanes and ports, which you're asking for more of, not less.
The number of GPUs has nothing to do with the discussion. The GPU gets its lanes from the CPU, not the chipset. These downstream lanes off the chipset are what I'm talking about, and there are already boards that are running out of them.
But PCIe lanes are PCIe lanes. If you need more than the 16 off the chipset, use the second x16 slot from the CPU. Your GPU will lose maybe 1% of performance at worst, and you'll get 8 more PCIe lanes to play around with. And again, if that 1% of performance is so important to you, buy an HEDT platform.

As for a 25W TDP, no, that is also unreasonable, and if it were that high then that would also be AMD's fault. The Z390 gives 24 downstream lanes and has a TDP of 6W, and it's also providing more I/O than the X570 would be. The fact is, thanks to AMD's better SoC-style platform and the CPU doing a lot of the I/O that Intel still has to rely on the southbridge to handle, the X570 has a major advantage when it comes to TDP, thanks to needing to do less work. And I'd also guess the high 15W TDP estimates of the X570 come down to the fact that they are using PCI-E 4.0.
Yes, the TDP is obviously due to PCIe 4.0 - higher frequencies mean more power. That's a given. And 15W is perfectly fine (especially as it's only likely to pull that much power under heavy loads, which will be infrequent), but 25W would be problematic, as you won't be able to cool that well passively without interfering with long AICs.

So, again, at this point in time I would rather them put more PCI-E 3.0 lanes in and not bother with PCI-E 4.0 in the consumer chipset. The more lanes will allow better motherboard designs without the need for switching and port disabling. It would likely lower the TDP as well.
Well, tough luck I guess. I'm more interested in a more future-proof platform, and I'm reasonably sure that I'll be more than happy with 16+16 PCIe 4.0 lanes. I'm more interested in the push for adoption of a newer, faster standard (which will inevitably lead to cheaper storage at 3.0 x4 speeds once the "new standard" premium wears off and 2-channel 4.0 controllers proliferate) than I am in stuffing dozens of devices into my PC.

And of course, yes, more 3.0 lanes would allow for more ports/slots/devices without the need for switching, and likely lower the TDP of the chipset. But it would also drive up motherboard costs as implementing all of those PCIe lanes will require more complex PCBs. The solution, as with cheaper Z3xx boards, will likely be that a lot of those lanes are left unused.

And the mainstream users are likely not using X570 either. They are likely going for the B-series boards, so likely B550. They buy less expensive boards, with fewer extra features, that require fewer PCI-E lanes. But enthusiasts that buy X570 boards expect those boards to be loaded with extra features, and most of those extras run off PCI-E lanes.
That's not quite true. Of course, it's possible that X570 will demand more of a premium than X470 or X370, and yes, there are a lot of people using Bx50 boards, but the vast majority of people on X3/470 are still very solidly in the "mainstream" category, and have relatively few PCIe devices.

Phison isn't going to be selling drives to the consumer; they never have, AFAIK. So it doesn't matter how well known they are to the consumer - they are very well known to the drive manufacturers. They sell the controllers to drive manufacturers, and the drive manufacturers sell the drives to consumers. Phison will charge more for their controller, and the drive manufacturers will charge more for the end drives. They will charge more because the controller costs more, the NAND needed to hit the actual higher rated speeds costs more, and they have the marketing gimmick of PCI-E 4.0.
No, they won't but they will be selling them to OEMs. Which OEMs? Not Samsung - which has the premium NVMe market cornered - and not WD, which is the current NVMe price/perf king. So they're left with brands with less stellar reputations, which means they'll be less able to sell products at ultra-premium prices, no matter the performance. Sure, some will try with exorbitant MSRPs, but prices inevitably drop once products hit the market. It's obvious that some will use PCIe 4.0 as a sales gimmick (with likely only QD>32 sequential reads exceeding PCIe 3.0 x4 speeds, if that), but in a couple of years the NVMe market is likely to have begun a wholesale transition to 4.0 with no real added cost. If AMD didn't move to 4.0 now, that move would happen an equivalent time after whenever 4.0 became available - in other words, we'd have to wait for a long time to get faster storage. The job of an interface is to provide plentiful performance for connected devices. PCIe 3.0 is reaching a point where it doesn't quite do that any more, so the move to 4.0 is sensible and timely. Again, it's obvious that there will be few devices available in the beginning, but every platform needs to start somewhere, and postponing the platform also means postponing everything else, which is a really bad plan.
 
The other advantage is that the 2nd full-speed PCIe slot is just that: the chipset lanes all share a single x4 uplink (or the similar DMI on Intel), which could be a big bottleneck. It won't cope with two PCIe 4.0 M.2 drives at full speed, so those are actually better off in a slot.
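The bottleneck arithmetic is simple (a sketch that assumes both drives sustain full Gen4 x4 reads simultaneously, i.e. the worst case):

```python
# Oversubscription of the shared x4 chipset uplink described above.
lane = 16.0 * (128 / 130) / 8  # usable GB/s per PCIe 4.0 lane, ~1.97
uplink = 4 * lane              # x4 uplink to the CPU, ~7.9 GB/s
demand = 2 * 4 * lane          # two Gen4 x4 M.2 drives flat out, ~15.8 GB/s
print(f"uplink {uplink:.1f} GB/s vs demand {demand:.1f} GB/s "
      f"({demand / uplink:.0f}x oversubscribed)")
```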

Given the speed of USB 3.1 versus most peripherals, a hub is probably better (again for those that need it) than putting a pile of USB on every motherboard.

Someone could in theory also build a monster I/O card giving out 2x the amount the chipset does (x8 versus x4), for the few people that want more I/O without going to HEDT.

As you say, the loss from x16 to x8 for a GPU is currently very low, even on 3.0.
 

newtekie1

Semi-Retired Folder
Implementing switchable PCIe through the chipset is free, as the functionality is built in. The only thing driving up costs would be adding the required lanes and ports, which you're asking for more of, not less.

No, it's not. It requires extra components on the board and extra programming in the BIOS. Neither of which is free.

But PCIe lanes are PCIe lanes. If you need more than the 16 off the chipset, use the second x16 slot from the CPU. Your GPU will lose maybe 1% of performance at worst, and you'll get 8 more PCIe lanes to play around with. And again, if that 1% of performance is so important to you, buy an HEDT platform.

Except, AFAIK, that isn't allowed. The other 8 lanes from the CPU have to be wired to a PCI-E slot. AMD doesn't allow you to use them as general purpose lanes. And, as much as you and I know that dropping the primary GPU down to x8 doesn't really affect performance, no one wants a motherboard that just always runs the single GPU at x8. Just look at how many threads we get here of people freaking out because their GPU isn't running at x16.

Yes, the TDP is obviously due to PCIe 4.0 - higher frequencies mean more power. That's a given. And 15W is perfectly fine (especially as it's only likely to pull that much power under heavy loads, which will be infrequent), but 25W would be problematic, as you won't be able to cool that well passively without interfering with long AICs.

Well, tough luck I guess. I'm more interested in a more future-proof platform, and I'm reasonably sure that I'll be more than happy with 16+16 PCIe 4.0 lanes. I'm more interested in the push for adoption of a newer, faster standard (which will inevitably lead to cheaper storage at 3.0 x4 speeds once the "new standard" premium wears off and 2-channel 4.0 controllers proliferate) than I am in stuffing dozens of devices into my PC.

And of course, yes, more 3.0 lanes would allow for more ports/slots/devices without the need for switching, and likely lower the TDP of the chipset. But it would also drive up motherboard costs as implementing all of those PCIe lanes will require more complex PCBs. The solution, as with cheaper Z3xx boards, will likely be that a lot of those lanes are left unused.

I'm not complaining about 15w, I'm countering the point that adding more lanes would cause the chipset to use 25w or whatever. That would not be the case with more PCI-E 3.0 lanes, and that is my point. I'd rather have a 15w chipset with 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes.

The lanes from the CPU are already PCI-E 4.0. That covers your GPUs and a high end PCI-E 4.0 M.2 SSD when you want to upgrade to one in the future. Make the chipset put out PCI-E 3.0 lanes and give more flexibility with more lanes. You're still getting your futureproofing, you're still getting your adoption of a new standard, and you also get more flexibility to add more components to the motherboard without forcing the consumer to make a decision between what they want to use.

No, they won't but they will be selling them to OEMs. Which OEMs? Not Samsung - which has the premium NVMe market cornered - and not WD, which is the current NVMe price/perf king. So they're left with brands with less stellar reputations, which means they'll be less able to sell products at ultra-premium prices, no matter the performance. Sure, some will try with exorbitant MSRPs, but prices inevitably drop once products hit the market. It's obvious that some will use PCIe 4.0 as a sales gimmick (with likely only QD>32 sequential reads exceeding PCIe 3.0 x4 speeds, if that), but in a couple of years the NVMe market is likely to have begun a wholesale transition to 4.0 with no real added cost. If AMD didn't move to 4.0 now, that move would happen an equivalent time after whenever 4.0 became available - in other words, we'd have to wait for a long time to get faster storage. The job of an interface is to provide plentiful performance for connected devices. PCIe 3.0 is reaching a point where it doesn't quite do that any more, so the move to 4.0 is sensible and timely. Again, it's obvious that there will be few devices available in the beginning, but every platform needs to start somewhere, and postponing the platform also means postponing everything else, which is a really bad plan.


So, what you're saying is that the only PCI-E 4.0 NVMe SSD controller we've seen so far won't be used by either of the two biggest well-known SSD manufacturers (it won't likely be used by Micron either, so that's actually the 3 biggest SSD manufacturers). Yeah, those PCI-E 4.0 controllers are ready to go mainstream, I tell ya!

And, like I said, it isn't like the platform wouldn't have a PCI-E 4.0 M.2 slot for the future anyway. Remember, I'm not arguing to completely get rid of PCI-E 4.0; the CPU would still be putting out PCI-E 4.0 lanes. So there would still be a slot available when the time comes that you actually want to buy a PCI-E 4.0 M.2 drive.
 
No, it's not. It requires extra components on the board and extra programming in the BIOS. Neither of which is free.
At worst it requires some very minor components to switch the lanes from one path to another. The few cents those cost are nothing compared to the price of a couple of extra PCB layers.

Except, AFAIK, that isn't allowed. The other 8 lanes from the CPU have to be wired to a PCI-E slot. AMD doesn't allow you to use them as general purpose lanes. And, as much as you and I know that dropping the primary GPU down to x8 doesn't really affect performance, no one wants a motherboard that just always runs the single GPU at x8. Just look at how many threads we get here of people freaking out because their GPU isn't running at x16.
You should tell that to all the SFF fans using bifurcated risers from the x16 slot on their ITX boards to run SSDs alongside their GPUs, or other PCIe AICs like 10GbE NICs. Heck, a few motherboards even support "trifurcation" into x8+x4+x4 with a suitable riser. They're not advertised as general purpose lanes, but if you connect something to them, they work. PCIe is PCIe. The general rule for any x16/x8+x8 motherboard is that your GPU will get half the bandwidth if you're not careful where you stick your WiFi card or whatever else you want to install. It's always been that way.
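For reference, the bifurcation modes such boards typically expose look something like this (an illustrative list; the actual options on offer are board- and BIOS-specific):

```python
# Common ways a CPU x16 slot can be bifurcated; every split must still
# account for all 16 lanes. Which modes exist depends on the board/BIOS.
bifurcation_modes = {
    "x16":         [16],
    "x8/x8":       [8, 8],
    "x8/x4/x4":    [8, 4, 4],      # the "trifurcation" case mentioned above
    "x4/x4/x4/x4": [4, 4, 4, 4],
}
for mode, widths in bifurcation_modes.items():
    assert sum(widths) == 16
    print(f"{mode:14s} -> {widths}")
```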

I'm not complaining about 15w, I'm countering the point that adding more lanes would cause the chipset to use 25w or whatever. That would not be the case with more PCI-E 3.0 lanes, and that is my point. I'd rather have a 15w chipset with 32 PCI-E 3.0 lanes than 16 PCI-E 4.0 lanes.
Again: this puts you in a tiny minority among MSDT users. Most will never, ever come close to using all their PCIe lanes. Again: you seem to be a personification of the target group for HEDT systems - someone wanting boatloads of PCIe. And what you want would drive up motherboard prices for everyone else for no good reason. PCIe traces are complicated and expensive to implement.

The lanes from the CPU are already PCI-E 4.0. That covers your GPUs and a high end PCI-E 4.0 M.2 SSD when you want to upgrade to one in the future. Make the chipset put out PCI-E 3.0 lanes and give more flexibility with more lanes. You're still getting your futureproofing, you're still getting your adoption of a new standard, and you also get more flexibility to add more components to the motherboard without forcing the consumer to make a decision between what they want to use.
There's no future proofing if all the output lanes are 3.0. Want to add a 4.0 x1 10GbE NIC in a couple of years? Yeah, sorry, it'll run at half speed. Want a TB4 controller when those come out? Or a USB 3.2G2x2 (or whatever the ¤%!&@! it's called) controller that doesn't eat a full four lanes? Sorry, no can do. I agree that a chipset with a 3.0 switch but a 4.0 uplink is far better than 3.0 all around, but given that PCs bought today are likely to be in service for the better part of the next decade, not wanting to future-proof the I/O with the fastest possible standards is rather silly.

So, what you're saying is that the only PCI-E 4.0 NVMe SSD controller we've seen so far won't be used by either of the two biggest well-known SSD manufacturers (it won't likely be used by Micron either, so that's actually the 3 biggest SSD manufacturers). Yeah, those PCI-E 4.0 controllers are ready to go mainstream, I tell ya!
Okay, you seem not to be actually reading. Have I said that there's a crapton of 4.0 SSDs around the corner? No. I've said - quite explicitly - that a key thing is to get 4.0-capable platforms out the gate so that component manufacturers get off their asses and start making products. And, as we've seen with SSDs, they will. Why do you think Samsung has been holding off on launching a 980 series? There's no way that's coming out without PCIe 4.0 support. And as always with new I/O standards, it'll take time for it to become common, so we have to get it going now if we want this to be available in 2-3 years rather than 4-5. If there weren't platforms coming for it, there wouldn't be PCIe 4.0 devices in production now either. This is why it's great for all PC enthusiasts that AMD is making this push at this exact time - the timing is very, very good.
 
Joined
Jan 15, 2015
Messages
362 (0.10/day)
15 watts is nothing if there's an adequate heatsink. X58 and 990FX were both above 20W and didn't have active cooling. This is just lazy design on the motherboard manufacturer's behalf.
You're forgetting that they have to sacrifice some things, in terms of quality, to pay for the plastic shrouds, brand logos, and rainbow LEDs.

People talk about how primitive the fans are, but it's not like tower VRM coolers are a new thing. Nor are boards with highly-finned copper coolers. But, clearly, we are advancing as an industry, because rainbow LEDs, plastic shrouds, and false phase-count claims are where it's at.

I wonder if even one of the board sellers are going to bring feature parity between AMD and Intel. Intel boards for quad CPUs were given coolers that could be hooked up to a loop. AMD buyers, despite having Piledriver to power, were given the innovation of tiny fans. Yes, folks, the tiny fan innovation for AMD was most recently seen in the near-EOL AM3+ boards. Meanwhile, only Intel buyers were considered serious enough to have the option of making use of their loops without having to pay through the nose for an EK-style solution. (I'm trying to remember the year the first hybrid VRM cooler was sold to Intel buyers. 2011? There was some controversy over it being anodized aluminum but ASUS claimed it was safe. Nevertheless, it switched to copper shortly after. I believe Gigabyte sold hybrid-cooled boards as well, for Intel systems. The inclusion of hybrid cooling was not a one-off. It was multigenerational and expanded from ASUS to Gigabyte.)

MSI's person said no one wanted this, but ASUS, at least, thought the return of the tiny fan was innovative.
 