Sunday, June 4th 2023
Innodisk at Computex 2023: Has the Right Idea About Gen 5 SSDs, to Make them AICs
Innodisk has the right idea about how to do PCIe Gen 5 NVMe SSDs—ditch the M.2 form-factor, and make them PCIe add-in cards. This removes the need for cartoonishly disproportionate cooling solutions with high-pitched 20 mm fans, and instead lets SSD designers use cooling solutions resembling those of graphics cards. Gen 5 NVMe controllers have a TDP of around 15 W, roughly similar to that of a motherboard chipset. The M.2-2280 form-factor is too tiny for a sufficiently large heatsink, and so SSD designers are resorting to active cooling, using 20 mm fans that don't sound pleasant. Most single-slot VGA cooling solutions can make short work of 15 W of heat while being much quieter—some are even fanless.
The Innodisk 5TG-P AIC SSD uses a PCIe Gen 5 + NVMe 2.0 SSD controller with a large passive heatsink, a PCI-Express 5.0 x4 host interface, and 32 TB of capacity. The drive runs entirely on slot power, and besides the 3D TLC NAND flash, uses a large DDR4 DRAM cache. The company claims sequential transfer speeds of up to 13 GB/s in either direction. Innodisk is targeting the 5TG-P at workstation and HEDT use-cases, and is also building it in server-relevant form-factors such as U.2 and E1.S. A CDM screenshot shows 13.62 GB/s sequential reads, with 11.55 GB/s sequential writes.
The nanoSSD PCIe 4TE3 is a single-chip BGA SSD that uses a PCI-Express 4.0 x4 host interface, offers capacities ranging from 128 GB to 1 TB, and delivers sequential transfer speeds of up to 3.6 GB/s reads, with up to 3.2 GB/s writes.
The P80 4TG2-P is a client-segment M.2-2280 SSD with a PCI-Express 4.0 x4 interface. The drive has been tested for PlayStation 5 compatibility, and comes in capacities ranging from 512 GB to 4 TB. It offers sequential transfer rates of up to 7.1 GB/s reads, with up to 5.8 GB/s writes.
The P42 4TE2 is an M.2-2242 SSD targeted at the OEM and SI markets. It uses a DRAMless Gen 4 controller, comes in capacities ranging from 128 GB to 2 TB, and offers speeds of up to 4 GB/s reads, with up to 3.4 GB/s writes.
The P110 4TG2-P iCell is an M.2-22110 SSD with a Gen 4 interface. It uses its extra length to deploy a bank of capacitors that provide power-loss protection. The drive comes in 512 GB through 4 TB capacities, and offers sequential speeds of up to 7.5 GB/s reads, with up to 5.3 GB/s writes. The P80 4TE2 iCell is a miniaturized version of this drive in the popular M.2-2280 form-factor; it uses a compact DRAMless architecture to free up PCB real-estate for the capacitor bank.
Innodisk also showed off an assortment of DDR5 DRAM products in standard UDIMM, R-DIMM, and SO-DIMM form-factors, with applications ranging from PCs to servers.
There are some stand-out products, such as the Ultra Temperature class of DDR4 and DDR5 SO-DIMMs, which are meant for outdoor, embedded, industrial, and automotive applications. These SO-DIMMs feature an extreme operational temperature range of -40°C to 125°C, and come in densities ranging between 8 GB single-rank to 32 GB dual-rank.
28 Comments on Innodisk at Computex 2023: Has the Right Idea About Gen 5 SSDs, to Make them AICs
Yet I've never bought one.
If you look at PCIe 4.0 solutions, not a single one of the AICs has a reasonable price.
Days of, for example, Sabertooth 990FX r3.0: single M.2, not fast enough to need a heatsink, certainly not an elaborate one.
Now motherboards will have 4-6 M.2 slots, elaborate heatsinks, and in the case of ASUS, things like DIMM.2. And only 2-4 PCIe slots: one x16, one x4, and maybe one x1 if you're lucky.
M.2 drives also require heatsinks, now quite sizable ones. PCIe AIC can have better thermal dissipation and capacity than most M.2 heatsinks. Without needing fans.
Then, motherboards can have more physical PCIe slots, so anyone not needing 6 M.2 drives can use PCIe for other things (or not at all, but the potential function remains).
M.2 is really a laptop design adapted to desktop.
It's still frustrating, though, that when paying for storage, some of the cost goes toward research into these insane sequential speeds that still don't really have a use case. The benefits of random I/O are more meaningful, and those don't need a higher-spec'd PCIe bus. I think if there was a mass rush to move to PCIe SSDs, the board manufacturers would just adapt again, which is in their favour, as invalidating previous purchase decisions is a great way to obsolete a previous product.
Although I do think a lot of consumers seem to be satisfied with Gen 4 being "good enough", which would probably stem the demand.
My next board has 3 M.2 and 5 PCIe slots, albeit only 3 of those PCIe slots with at least 4 lanes, and one used by the GPU. So, effectively, combined capacity for 5 NVMe drives. I think that's a better config than one less PCIe slot in trade for one more M.2.
My current board has, for practical purposes, just one M.2 slot (the other is only 2 lanes), and it disables 2 SATA ports when used. However, I can use PCIe x4 slots for NVMe if needed, which don't disable any SATA ports.
Besides, as you pointed out, laptops will continue to use M.2 (and some handhelds like Steam Deck, etc.). It would not be economical to have M.2 for laptops only, and get rid of it for the desktop market completely.
Plus, what's the likelihood of a mobo manufacturer taking the only x4 5.0 lanes from the CPU (meant for storage) and wiring them to an x4 slot INSTEAD of an M.2 slot? In enterprise and prosumer/HEDT this is not an issue obviously, but for consumer platforms it would be. Plus there's the usual inertia that'll have to be overcome; after all, mobo makers seem to just love adding more and more M.2 slots... has anyone here actually needed 4 or 5 M.2 slots? Rather, they could just give me 3 slots and cut $50 off the price.
I have servers that hammer datacentre-grade MLC drives all day, and those need to be AICs for cooling reasons, but for consumer desktop OSes there's literally no use case for them yet. DirectStorage was lauded as the reason we'd need PCIe 5.0 SSDs in the future, but we're still waiting. So far we've had one game, and DirectStorage made no appreciable difference. When the majority of new games are using DirectStorage, perhaps PCIe 5.0 SSDs will be a more compelling buy for consumers. Until then, you can blind-test PCIe 3.0 x2 and PCIe 5.0 x4 drives, and in 99% of the usage examples, nobody is going to know the difference.
I use a PCIe 4.0 7.68 TB U.2 drive through an M.2-to-U.2 adapter and it is just perfect.
This validates all of my concerns about the M.2 Format. Heat issues.
Again, I already have a WD SN850 and I did not like the heat—not only what was being generated by the stick/heatsink, but it also increased the overall heat in the rig. It was not much, but I run a very cool rig that uses small amounts of wattage to get my results, because energy costs money these days.
As stated before I am going to be using my sticks in an external setup as I can disperse the heat outside of the case.
We already had similar products going back years (RAM disk, anyone?), and yet only NOW is someone at a company deciding to make these types of cards (which, by the way, I would buy)?
Overall, this is just sloppy, lazy @$$ed thinking at the bigger companies in the tech industry. Man, they had to know there were going to be heat issues when cranking up that type of performance.
Again, IMHO this is the talking heads of marketing saying... Wow... let's increase our profit margins by making... heatsinks!!!... But not only just heatsinks... BIG heatsinks with FANS!!!... Brilliant!!
Then paint it white and pink!!!... Place a Hello Kitty/Flavor of the month trend and we can make MOAR money off of the idio... "ahem" customers.
I am glad at least one company is making this type of card. I just wish this had been done a few years ago, because the technology has definitely been there for a while.
This trend of slapping 3 or 4 M.2 slots on motherboards was always pretty silly, probably a way to save money vs. using U.2 or PCIe slots (they could even sell AIC adapters for M.2 later, but whatever). It seems we're going full circle indeed, as chrcoluk said. Just like they adapted to putting 4 M.2 slots on boards, they can adapt back to putting more universal PCIe slots.
Hmm, I very much doubt you were able to measure any meaningful difference in temperature or power usage just because of the SSD. The drive is rated at 9 W when going full bore (AnandTech measured 7-8 W)—but for the SSD to go full bore, the rest of the system is also doing something—and less than 1 W during regular desktop use.
Good luck with moving the SSDs outside the case; I'm sure the cost of the M.2/PCIe extensions and the probable speed degradation will be worth the ~0.1°C decrease in temperature inside the case :D
There are currently practically no benefits to buying an ultra-fast PCIe Gen 5 NVMe SSD, aside from benchmarking. There were very few use cases where PCIe Gen 2, 3, or 4 drives showed any noticeable speedup over SATA drives in any task the user actually performs—aside from maybe copying from one such SSD to another, which mainly just shows how much of that SSD's speed is just a small buffer, and how slow it gets from then on.
DirectStorage is apparently dead in the water, too—the hardware is too diverse for games to actually implement it and benefit from it. PCs are unfortunately not gaming consoles.
So what would ditching small-format M.2 SSDs (let's say PCIe Gen 4, with small thermal requirements) for large PCIe Gen 5 add-in cards actually bring us, in terms of user experience? Any faster Windows or application startup times? Shorter game loading times? Actually noticeable speedups, or just slightly shorter bars in benchmarks?
Is that really "the right idea about how to do PCIe Gen 5 NVMe SSDs" then?
Maybe in a file server, but in a home PC, definitely not!
Forspoken DirectStorage on test
Apparently it takes a minute instead of a couple of seconds to load the scene on an HDD, but when it does, it runs faster?
Issues like this prevent widespread use, and apparently no one wants to be a guinea pig for new, untested features. So we have the hardware, we have all the underlying support in the OS, and game developers who don't want to use the features.
Power efficiency, manufacturing optimisations for lower-cost models, and all of the bugfixes, tweaks, and lessons learned will likely come in the next-gen at the earliest.