
HighPoint Rocket 1608A 8-Slot M.2 Gen 5

W1zzard

The HighPoint Rocket 1608A is a PCIe Gen 5 controller card that lets you add up to eight Gen 5 M.2 drives. Performance is truly incredible: we were able to measure raw sequential speeds of over 56 GB/s in our review.
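For context on how close that is to the interface limit: PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding, so a quick back-of-the-envelope check (ignoring protocol overhead) puts the measured 56 GB/s at roughly 89% of what an x16 link can carry:

```python
# Back-of-the-envelope: theoretical PCIe 5.0 x16 bandwidth vs. the 56 GB/s measured
GT_PER_S = 32e9        # PCIe 5.0 signaling rate per lane (32 GT/s)
ENCODING = 128 / 130   # 128b/130b line encoding
LANES = 16

limit = GT_PER_S * ENCODING * LANES / 8              # bits -> bytes
print(f"x16 limit: {limit / 1e9:.1f} GB/s")          # ~63.0 GB/s
print(f"measured 56 GB/s is {56e9 / limit:.0%} of the bus")  # ~89%
```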

 
I suppose if you were editing 8K or larger this would fit a workstation very well. Not seeing much use for it outside of that, since random performance isn't scaling. Is there any CPU overhead?
 
As impractical as this is for most users I do love seeing these reviews.
 
I suppose if you were editing 8K or larger this would fit a workstation very well.
I seriously doubt it would make any kind of significant difference even for something like that over a single PCIe gen 5 drive.
 
The fans run as high as they do even with no drives installed or any real activity because the people who designed this board KNOW that if the fan isn't running that fast right off the bat, and you have all those drives in there just waiting to go, the whole board will melt. They don't want the excess RMAs, so they played it safe and set an aggressive fan profile.
 
Just put triple-slot cooling on it like a normal GPU and call it a day :roll:
 
$1,500 and only a 2-year warranty? :eek:

On a side note, I think the fan there is a little pointless... Either remove it or put a shroud on it like on a GPU so the cool air gets channelled down along the fins of the heatsink instead of everywhere and anywhere.
 
At that insane price tag it would be well worth considering buying a pair of relatively cheap 4x M.2 to PCIe Gen 5 x4 cards for £88 each (ASUS Hyper M.2 PCIe x16 Gen 5 NVMe Card with 4x M.2 Slots) and putting the extra £1,300 towards something else. For a system that wants as many PCIe Gen 5 M.2 slots as possible but doesn't have the card slots / PCIe lanes available to go with the far cheaper option, so likely not a Threadripper platform, this is a pretty nice use case. Otherwise this will end up in a server, where the price won't be an issue if the product is considered reliable and does the job.
 
At that insane price tag it would be well worth considering buying a pair of relatively cheap 4x M.2 to PCIe Gen 5 x4 cards for £88 each (ASUS Hyper M.2 PCIe x16 Gen 5 NVMe Card with 4x M.2 Slots) and putting the extra £1,300 towards something else. For a system that wants as many PCIe Gen 5 M.2 slots as possible but doesn't have the card slots / PCIe lanes available to go with the far cheaper option, so likely not a Threadripper platform, this is a pretty nice use case. Otherwise this will end up in a server, where the price won't be an issue if the product is considered reliable and does the job.
For me this only makes sense for TR boards. If you put this in an X670E board it will eat all the attached PCIe lanes. The cost is in line with any other adapter card that comes with a controller. If you want something like this on AM5 you would need a WD AN1500 drive, as that is the cheapest adapter card with a controller. It is only PCIe 3.0, but you can put whatever you want in it. I bought the 1TB version and upgraded it to 4TB with two $149.99 2TB drives.
 
For me this only makes sense for TR boards.
That's kind of what I thought as well, but TR boards have lots of slots, so using two cheap cards to do the same job at the same performance while saving £1,300 is very enticing, not least because it could easily mean the difference between getting 4x 2TB drives or 4x 4TB drives.
 
Are there more products similar to this one in function for a significantly lower price that you'd recommend? Like even just with 3-4 Gen 5 slots instead of 8.
 
Are there more products similar to this one in function for a significantly lower price that you'd recommend? Like even just with 3-4 Gen 5 slots instead of 8.
ASUS Hyper M.2 PCIe x16 Gen 5 NVMe Card with 4x M.2 Slots.

It's cheap because it simply provides 4x4 PCIe lanes and cooling on a smallish single-slot card. They also do (cheaper) PCIe 4 ones, and I have a PCIe 3 version of the card which you might be able to find for even less, depending on need.

You will need to make sure that the PCIe slot you intend to use is correctly set up in the BIOS if it is not automatic. Check with ASUS for the newer versions, but the PCIe 3 version needs the BIOS slot setting at x4/x4/x4/x4 (or similar). The OS then simply sees all installed drives exactly as though they had been plugged directly into the motherboard's own M.2 slots. Any number of M.2 drives can be used, from 1 to 4, in any mix of capacity, make, model, and PCIe version, and everything is handled by the OS.
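If you want to sanity-check that all the drives behind the bifurcated slot actually enumerated, here's a minimal sketch (assuming a Linux system; it just reads the standard sysfs entries for NVMe controllers):

```python
# List every NVMe controller the kernel can see, with its PCIe address and
# model string, to confirm all drives on the bifurcated card showed up.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    model = (ctrl / "model").read_text().strip()
    addr = (ctrl / "address").read_text().strip()  # PCIe domain:bus:device.function
    print(f"{ctrl.name}: {model} @ {addr}")
```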

There are other products out there. I know that ASRock have one, and MSI has one that supports at least 2 M.2 drives. FYI, they (should) all be drop-in compatible with any board; my Gen 3 ASUS Hyper card worked just fine in an ASRock board. No doubt there are other brands out there, but I only have experience with the ASUS Hyper Gen 3 version. Good luck.
 
The fans run as high as they do even with no drives installed or any real activity because the people who designed this board KNOW that if the fan isn't running that fast right off the bat, and you have all those drives in there just waiting to go, the whole board will melt. They don't want the excess RMAs, so they played it safe and set an aggressive fan profile.
No, the board will not melt. That's just silly.

Temperature sensors are a thing.
 
I really need this for work, maxed out at 64TB as a swap drive; however, the non-sequential performance is too low.

Does anyone have experience with the Rocket 7608A? Is it better in random IO (I don't care about the RAID options)?

Or a Gen 5 AIC with bifurcation using software RAID 0 on a Threadripper?

The price tag is acceptable when compared with server configs with 24 terabytes of RAM, provided the performance drop is not too massive.
 
Almost spat my coffee when I saw the pricing. Is the controller chip that expensive? There's practically nothing else on the PCB.
 
Very detailed and thorough review. I wonder what the problem with Intel boards is; strange. Interesting, though a very, very niche use case product given the enormous cost.

Surprised they haven't tried to find something related to AI as a sales hook.
 
Does anyone have experience with the Rocket 7608A? Is it better in random IO (I don't care about the RAID options)?
Not yet. Seems to be the same hardware platform. Maybe they'll give me a sample.


Almost spat my coffee when I saw the pricing. Is the controller chip that expensive? There's practically nothing else on the PCB.
Yeah, Broadcom wants a lot of money for that chip; my guess is around $1k.
 
You will need to make sure that the PCIe slot that you intend to use is correctly setup in the BIOS if it is not automatic, check with ASUS for the newer versions but the PCIe 3 version needs the BIOS to be set to 4x4x4x4 (or similar), then the OS simply sees all devices installed exactly as though they had been plugged directly into the motherboards very own M2 slots, any number of M2 drives can be used from 1-4, in any mix of capacity, make, model, and PCIe version and then everything is handled by the OS.
Yes, the ability to split x16 into x4+x4+x4+x4 has been documented by Asus, and probably by other board makers too, but never by AMD itself. In that regard it's similar to ECC support.
AMD may remove this ability in the future with no advance notice. Intel did that too: Core CPUs used to be more flexible before Alder Lake (x8+x4+x4), but now it's just x8+x8.

The usefulness of this Rocket card is also questionable, or at least very limited, in a Threadripper, Epyc or Xeon system. All of them have great bifurcation abilities on the CPU itself, which is also documented. AMD chips can split each PCIe Gen5 x16 bus into as many as nine Gen5 links. Intel chips can split a Gen5 x16 bus into eight links at the most, but in that case, the links become Gen4.
 
Very detailed and thorough review. I wonder what the problem with Intel boards is; strange. Interesting, though a very, very niche use case product given the enormous cost.

Surprised they haven't tried to find something related to AI as a sales hook.

Machine learning would definitely be a possible application if the random performance approached the sequential.

Cost is relative: a single 256GB RDIMM costs above $1,500 and you need 96 of them to get 24TB of RAM, which is roughly $144,000 in DIMMs alone. Not to mention the cost of the motherboard/CPU that can take that configuration… AWS and Azure are not cheap either when you spec 24TB.

If this card allows you to use less RAM plus very fast swap, it can drop the cost by orders of magnitude. Depending on the application, this tiered memory might be good enough for some machine learning workloads that cannot be distributed across machines.

I am dealing with exactly this problem and have been reading spec sheets and Linux documentation.

The benchmarks were truly a great help.
 
If this card allows you to use less RAM plus very fast swap, it can drop the cost by orders of magnitude. Depending on the application, this tiered memory might be good enough for some machine learning workloads that cannot be distributed across machines.
No matter how magnificent it looks, it has the bandwidth of a single DDR5-7200 stick of RAM at best, and limited write endurance. A pretty bad RAM extension regardless of the purpose.
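The arithmetic behind that comparison, for anyone curious (a DDR5 DIMM has a 64-bit data bus, so 8 bytes move per transfer):

```python
# Peak bandwidth of a single DDR5-7200 DIMM vs. the card's 56 GB/s sequential
transfers = 7200e6        # DDR5-7200 = 7200 MT/s
bytes_per_transfer = 8    # 64-bit DIMM data bus
peak = transfers * bytes_per_transfer
print(f"one DDR5-7200 DIMM: {peak / 1e9:.1f} GB/s")  # ~57.6 GB/s, right at 56 GB/s
```

And that ignores latency, where RAM is still far ahead.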
 
Yes, the ability to split x16 into x4+x4+x4+x4 has been documented by Asus, and probably by other board makers too, but never by AMD itself. In that regard it's similar to ECC support.
AMD may remove this ability in the future with no advance notice. Intel did that too: Core CPUs used to be more flexible before Alder Lake (x8+x4+x4), but now it's just x8+x8.
Perhaps worry about this "if" it were ever to happen, which would have to happen via a BIOS update. Mine has been fine for years, albeit the BIOS hasn't needed to be updated in years, and should I require a BIOS update to upgrade a CPU I will take this into consideration. Thanks for the warning!
 
Not yet. Seems to be the same hardware platform. Maybe they'll give me a sample.

Thank you for the review.
Would love to see if there is a performance difference in RAID 0, and perhaps a card with bifurcation at some point in the future :)

No matter how magnificent it looks, it has the bandwidth of a single DDR5-7200 stick of RAM at best, and limited write endurance. A pretty bad RAM extension regardless of the purpose.

True 12-channel DDR5 blows it out of the water in everything except cost. And that is outside the current budget, so I am trying to cobble together a solution.

The application loads the dataset and then runs computations on it via reads for multiple days, so write endurance is not a big deal.

If only Linux could use multiple swaps concurrently for better read speed, I would have been able to configure a Threadripper with multiple cards like this and boost performance.

I will probably have to try the card anyway to test if it can perform sufficiently for the job.

Note for whoever cares: I just learned that FreeBSD might be able to do this (use multiple swap devices in parallel). This might work after all.
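For what it's worth, the Linux swapon(8) man page does document round-robin allocation between swap areas that share the same priority, which sounds like the parallel behaviour I was after. A minimal sketch to see what priorities the kernel currently has (assuming Linux, parsing /proc/swaps):

```python
# Print active swap areas and their priorities; areas that share a priority
# are filled round-robin, spreading swap I/O across the devices.
with open("/proc/swaps") as f:
    rows = f.read().splitlines()[1:]   # skip the header line

for row in rows:
    cols = row.split()                 # Filename Type Size Used Priority
    print(f"{cols[0]}: priority {cols[-1]}")
```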
 
Machine learning would definitely be a possible application if the random performance approached the sequential.

Cost is relative: a single 256GB RDIMM costs above $1,500 and you need 96 of them to get 24TB of RAM, which is roughly $144,000 in DIMMs alone. Not to mention the cost of the motherboard/CPU that can take that configuration… AWS and Azure are not cheap either when you spec 24TB.

If this card allows you to use less RAM plus very fast swap, it can drop the cost by orders of magnitude. Depending on the application, this tiered memory might be good enough for some machine learning workloads that cannot be distributed across machines.

I am dealing with exactly this problem and have been reading spec sheets and Linux documentation.

The benchmarks were truly a great help.
To my knowledge, the best option out there for "affordable" low-latency mass storage is still Intel Optane, but a new entrant has come along that might be a viable option to consider for your needs: the Solidigm D7-P5810. It is nearing Optane levels of latency, and having just taken a quick look at the little video talking about it (a Level1Techs YT video), it has insane write endurance, so perhaps it's the write latency they have made excellent while the read is unaffected.

Interestingly, this shows that SSD manufacturers are building specialised models of SSDs for certain purposes (such as the one above) while using standard flash chips and changing the firmware to use those chips in a different way. This might mean there is a product out there for you even if it is not the one I noted above. Good luck.
 
To my knowledge, the best option out there for "affordable" low-latency mass storage is still Intel Optane, but a new entrant has come along and is looking like a viable option for you to consider: the Solidigm D7-P5810. There are likely to be others like it from other manufacturers that use this technique to get this performance (with sacrifices, of course).

Thank you for the suggestion. I am looking at the Solidigm right now and will explore the product category.

Optane would have been a great solution, but sadly it was discontinued.
 