
HighPoint Rocket 1608A 8-Slot M.2 Gen 5

Thank you for the suggestion. I am looking at the Solidigm right now and will explore the product category.

Optane would have been a great solution but sadly discontinued.
Solidigm is what Intel's SSD division became.
 
Not quite pertinent, but I think it's interesting that this is the second adapter of this kind to include "Rocket" in its name, or in the names of related products.

Considering how few of these are actually coming to the market, it's quite a...poverty of words for things that go fast. :p
 
Those who really need this won't be satisfied by the pathetic 2-year warranty.
 
Yeah, so the reason this doesn't work on Intel platforms is that on LGA1700, Intel moved from x8+x4+x4 to x8+x8. Not sure WHY they did this, but LGA1151 and LGA1200 both supported x8+x4+x4.

As for why they can't do x4+x4+x4+x4, I believe Intel limits that feature to their HEDT and server platforms, which actually do support it. Or maybe they figured most power users would be running a graphics card, so they kept the x8 on the first slot, didn't spend the extra engineering on splitting that x8 down to x4+x4, and saved some cost there. This is also why, on mainstream Intel platforms, you could only use 3 out of the 4 M.2 slots on the Hyper M.2 x16 cards, for example.

AMD supports x4+x4+x4+x4 even on their cheapest A620 boards, and I think on AM4 boards as well, as long as the board vendor adds bifurcation support to the BIOS.
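For anyone who wants to sanity-check what a slot actually negotiated, here's a rough Linux-side sketch (standard sysfs attributes only; the PCI addresses will differ per system):

import glob, os

# Print negotiated vs. maximum link width and current speed for every PCI device.
# Devices that don't expose link attributes are simply skipped.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        cur_w = open(os.path.join(dev, "current_link_width")).read().strip()
        max_w = open(os.path.join(dev, "max_link_width")).read().strip()
        cur_s = open(os.path.join(dev, "current_link_speed")).read().strip()
    except OSError:
        continue  # no link attributes for this device
    print(f"{os.path.basename(dev)}: x{cur_w} (max x{max_w}) @ {cur_s}")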

NGL, $1500 for a PCIe card that can't even have all slots working without a motherboard that supports bifurcation, and specifically x4+x4+x4+x4 at that... this should not be $1500. It's probably a $500 card at best. Sure, it's PCIe 5.0 and it does get extra lanes from the Broadcom chip, but still.

It is UNNATURALLY F*CKING DIFFICULT to find a PCIe M.2 carrier card that can split an x8 or x16 signal down to x4 without bifurcation from the motherboard. I have an 11900K and a Maximus XIII Hero in my VM/NAS PC, and I've been looking at adding a dual M.2 PCIe card in the first slot. I already have an x4 SSD in the second slot, all other slots (except the x1s, which I intend to populate later) are filled, and the board splits the CPU lanes into x8 for the first slot, x4 for the second, and x4 for one of the M.2 slots, so I can't split the first x8 into two groups. Every single card either has the stupid bifurcation requirement or doesn't support at least PCIe 4.0.

But of course, why would they lower the price? The people who can take full advantage of this card don't have the bifurcation dilemma anyway, since they're likely using it in servers with full x16 slots that can bifurcate, on top of Thunderbolt/network cards for stupid-fast NAS use.
 
I have a question, friend: the power testing section didn't use the Quarch, right? Since it measures only the PCIe slot and doesn't have any GPU 6-pin power connector, did you use one of those GPU power meters? The graphs say "full system power"; it would be great to see only the card's draw.
@W1zzard
 
Nice review.

I was recently testing 2x Crucial T705 on a Gigabyte B650E Master (4x PCIe 5.0 x4), and RAID0 is nothing but disappointing. PCIe 4.0 RAID0 didn't scale particularly well but was still fine with 3-4 SSDs. PCIe 5.0 RAID0 is clearly worse than a single SSD in almost everything (except sequential bandwidth). I know it depends on many factors, but on a typical desktop/gaming PC it's not worth it.

(attached benchmark screenshot: CrucialT705LE_res10.jpg)
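For context, a quick back-of-the-envelope sketch of the PCIe ceilings involved (line encoding only; real sequential numbers land a bit lower due to packet overhead):

def pcie_bandwidth_gbs(gen, lanes):
    # Per-lane raw rates in GT/s and the 128b/130b line encoding used by Gen 3 and later.
    rates = {3: 8.0, 4: 16.0, 5: 32.0}
    return rates[gen] * (128 / 130) / 8 * lanes  # GB/s before protocol overhead

for gen in (3, 4, 5):
    print(f"Gen {gen}: x4 ~{pcie_bandwidth_gbs(gen, 4):.1f} GB/s, "
          f"x16 ~{pcie_bandwidth_gbs(gen, 16):.1f} GB/s")

Two Gen 5 x4 drives in RAID0 can beat one drive on sequential throughput, but random I/O is bound by latency and queue depth rather than link width, which is why it doesn't scale the same way.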
 
Maybe I missed this in the review, but how was RAID-0 created over the disks? AMD NVMe RAID or Windows Storage Spaces?
 
I suppose if you were editing 8K or larger this would fit a workstation very well. Not seeing much use for it outside that since random performance isn't scaling. Is there any CPU overhead?
I've got a HighPoint SSD7104 (Gen 3.0 x16, 4x M.2 RAID AIC) running as the first storage tier of my NAS (I've had a 10GBase-T home network for years) with four 2 TB drives in it, and it's been running without issue for four years.

Is it a bit of overkill for a home server? Yes, but it's super nice fully saturating a 10GBASE-T connection on upload or download, and I got it super cheap from a friend who is a network tech for a big company. Just figured I'd mention another use case for one of them.
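For scale, 10GBASE-T tops out well below even a single Gen 3 x4 drive, so saturating the link is easy - rough math below:

# Rough line-rate comparison: 10GBASE-T vs. one PCIe Gen 3 x4 SSD.
nic_gbs = 10 / 8                      # ~1.25 GB/s, before TCP/frame overhead
ssd_gbs = 8.0 * (128 / 130) / 8 * 4   # ~3.9 GB/s PCIe ceiling for a Gen 3 x4 link
print(f"10GbE ~{nic_gbs:.2f} GB/s vs. Gen 3 x4 ~{ssd_gbs:.1f} GB/s")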

Are there more products similar to this one in function, for a significantly lower price, that you'd recommend? Even just with 3-4 Gen 5 slots instead of 8.
You could use one of their older cards, a Gen 3 or Gen 4 model; I've been using a four-slot Gen 3 card of theirs for years.
 
Yeah, so the reason this doesn't work on Intel platforms is that on LGA1700, Intel moved from x8+x4+x4 to x8+x8.
No, this can't be the true reason. The card has a PCIe switch built in (that's why it's so expensive). It uses the x16 interface to the CPU as a single x16 link, without even splitting it into two x8 links or depending on any bifurcation ability of the CPU itself. There's no reason to assume otherwise (however, it would still be nice if W1zz could confirm that).

Also, it works. But only at x8. There's some kind of incompatibility to blame here, maybe solvable by FW updates, maybe not.
 
Nice but for certain use cases only.
Personally I'd be happy with 4 drives running at PCIe 5.0 x4, and that MSI drive was killing it in the temperature tests (in a good way).
 
bifurcation
The card does not use bifurcation, as mentioned several times in the review. It wants an x16 PCIe 5.0 connection.


Maybe I missed this in the review, but how was RAID-0 created over the disks? AMD NVMe RAID or Windows Storage Spaces?
Classic Windows RAID, through Disk Management, not Storage Spaces.

I have a question, friend: the power testing section didn't use the Quarch, right? Since it measures only the PCIe slot and doesn't have any GPU 6-pin power connector, did you use one of those GPU power meters? The graphs say "full system power"; it would be great to see only the card's draw.
@W1zzard
No plans for card-only power. Measuring x16 at Gen 5 without loss of signal integrity is hard.
 
Lots of comments on the price. Two major things drive the cost:

1) It's VERY specialized. Anyone who truly needs something like this isn't going to be held back much by the asking price.

2) Volume. Electronics can be cheap when manufactured and sold in the hundreds of thousands or millions. This might sell in the tens of thousands.
 
I despise the fact that PCIe lane expansion chips like the PEX89048 have become server-grade levels of price. Back in the nForce days every board had a 16-to-32 lane chip and those didn't cost $500. And no, PCIe 5.0 does not warrant that cost.
 
All the Intel boards tried so far share some kind of built-in bifurcation, either for an x8+x8 slot configuration or to provide a 5.0 M.2 slot (which is practically the same thing).
All the routing and switching logic required for that may be the curse here.

@W1zzard
Is there any chance you could verify link negotiation on a simpler LGA1700 board, e.g. a PCIe 5.0 B760 or an entry-level Z690?
 
This is an interesting review, different from the typical devices.

I like the idea of using a bridge chip, and I feel that should be the default on multi-device cards, but I'm not a fan of them slapping on a RAID controller. I would rather this ran in some kind of dumb mode, where each device is detected individually as if it were directly on the board.

I think what I'd like to see is a card designed for x4 slots (as these are far more readily available) that supports 2-4 M.2 drives using a bridge chip, so no bifurcation is needed and the lanes are overprovisioned. Also priced at a consumer level, with no RAID controller chip.

I have two questions for @W1zzard

1 - For the ASPM testing, which specific BIOS options are enabled? Is it just the native ASPM support, or are you turning them all on? I've been fiddling with this stuff lately while trying to manage M.2 temps.

2 - Does the card have an option for some kind of dumb passthrough mode that presents the individual drives directly to the OS, including SMART? If not, how would you use the drives separately?

All the Intel boards tried so far share some kind of built-in bifurcation, either for an x8+x8 slot configuration or to provide a 5.0 M.2 slot (which is practically the same thing).
All the routing and switching logic required for that may be the curse here.

@W1zzard
Is there any chance you could verify link negotiation on a simpler LGA1700 board, e.g. a PCIe 5.0 B760 or an entry-level Z690?
Yep, I suspect this Intel issue will go away on Z890 if they do away with that lane split.

Also, perhaps this could be tested on a Z690 to see if it stays at 16 lanes, since there's no built-in split mode like you described. It would still be Gen 5, as the first x16 slot is Gen 5.

The ASRock Z690 Steel Legend has no lane sharing at all on the x16 PCIe slot; it doesn't share with the 2nd PCIe slot either, so it would be a good choice. But I expect there are other Z690 or B-chipset boards with no lane sharing.

Those who really need this won't be satisfied by the pathetic 2-year warranty.
Yeah, what's the deal with that? Don't they have faith in their own kit? For something costing $1500 it's awful.
 
2 - Does the card have an option for some kind of dumb passthrough mode that presents the individual drives directly to the OS, including SMART? If not, how would you use the drives separately?
The Rocket 1608A is not a RAID card at all. The other variant, the 7608A, is. But its product page says "RAID Support: Single, RAID 0, 1, 10", and "single" means non-RAID, I assume. Besides, is any dedicated RAID controller so inflexible as to not even allow non-RAID single drives?

@W1zzard
Are there any known specifications for the SSDs themselves? Just some generic 2 TB Phison E26 drives provided by HighPoint?

Also, do you have any plans to push the SLC caches to their limits with a multi-threaded write test? The whole setup should be able to absorb up to about 5 TB at the maximum speed of 56 GB/s, if it doesn't melt of course, and then drop to a slower mode - just like single SSDs do, only multiplied by 8.
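Rough math on that, using the numbers above (both are estimates, of course):

# Time to exhaust the combined pSLC cache at the array's peak write speed.
cache_tb = 5        # estimated combined pSLC cache across all 8 drives, in TB
peak_gbs = 56       # estimated peak sequential write of the whole card, in GB/s
seconds = cache_tb * 1000 / peak_gbs
print(f"~{seconds:.0f} s of full-speed writes before dropping out of pSLC")

So roughly a minute and a half of sustained writes before the fold-over, assuming nothing throttles first.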
 
Yep, I tend to avoid RAID cards unless they have some kind of special passthrough mode, which is one reason I like the dumb ASMedia cards. Things like ZFS work better without them, and they can also block access to SMART stats. Thank you for explaining the 7608A bit; yeah, it does say that in the review.

Given this card has no RAID chip, the price seems ridiculous, which makes the warranty look even worse, as if it wasn't bad enough already. $1500 for 8 M.2 slots and a bridge chip.
 
Yep, I tend to avoid RAID cards unless they have some kind of special passthrough mode, which is one reason I like the dumb ASMedia cards. Things like ZFS work better without them, and they can also block access to SMART stats. Thank you for explaining the 7608A bit; yeah, it does say that in the review.

Given this card has no RAID chip, the price seems ridiculous, which makes the warranty look even worse, as if it wasn't bad enough already. $1500 for 8 M.2 slots and a bridge chip.
It's a very niche product aimed at businesses, not consumers. You're always going to pay through the posterior for those.
 
There are PCIe 3.0 and 4.0 variants of this card. When I first saw them, my mind quickly jumped to a COMPACT TRUENAS SCALE ALL-NVME NAS. I was thinking something along the lines of:

1. Minisforum AR900i (or the 14th-gen variant) with support for 4x NVMe drives
2. PCIe x16 -> 2x x8 splitter (C-Payne PCB Design, x8/x8 bifurcation on the motherboard required)
3. 2x HighPoint Rocket cards, each with support for 8x NVMe drives (16 total)
4. Sell both kidneys for 18x 8 TB NVMe drives (PCIe 3.0 or 4.0, doesn't matter)
5. 1x small NVMe drive for the OS
6. 1x regular-size NVMe drive for Docker apps and VMs
7. The smallest case that will fit all of this (like the original DAN A4 or a FormD T1, but ideally even smaller)

If possible, something that would only need an HDPLEX 500 W slim PSU. At this point I would even consider a custom case.
I also wouldn't mind just one of these HighPoint Rocket cards and forgoing the x8/x8 splitter, but that means far less capacity.

I could not find info on whether these HighPoint Rocket cards support running at PCIe x8 or x4. Since they use a bridge chip, this should be possible. I'm not interested in the speed, just the capacity expansion they provide.

A man can dream.
 
I seriously doubt it would make any kind of significant difference, even for something like that, over a single PCIe Gen 5 drive.
lol, capacity is welcome too
 
Wow, I was not expecting that price. Damn.
 
All the Intel boards tried so far share some kind of built-in bifurcation, either for an x8+x8 slot configuration or to provide a 5.0 M.2 slot (which is practically the same thing).
All the routing and switching logic required for that may be the curse here.

@W1zzard
Is there any chance you could verify link negotiation on a simpler LGA1700 board, e.g. a PCIe 5.0 B760 or an entry-level Z690?
The problem with LGA1700 is that if you add a 5.0 M.2 drive, the boards that support 5.0 will automatically set the PCIe lanes to x8+x8, but then you don't have another M.2 slot wired to the CPU, so you essentially lose 4 lanes. That is probably the impetus for that NVIDIA card that comes with an M.2 slot.
 
Not likely, since the Broadcom PEX89048 Chip costs $497.25.
Someone beat me to it. ;)

Personally I don't like this card or the Accelsior 8x M.2 Gen 4 card from OWC. The blue theme is nice, but not much else. OWC/HighPoint use stupidly loud fans that spin needlessly even when plenty of airflow is present, and QC/warranty - outside the US - is an iffy business. Sonnet's (so far only) 8x M.2 Gen 4 card has a fully passive heatsink and just needs normal airflow for the drives to stay cool at basically no noise - especially if using Noctua's 200 mm fans in the case. :chefkiss:

Despite the advertising aiming at movie studios, these cards are ideal for storage density, not speed - only a lunatic runs this in RAID0. You can build large drive pools out of cheap-o M.2s.

The problem with LGA1700 is that if you add a 5.0 M.2 drive, the boards that support 5.0 will automatically set the PCIe lanes to x8+x8, but then you don't have another M.2 slot wired to the CPU, so you essentially lose 4 lanes. That is probably the impetus for that NVIDIA card that comes with an M.2 slot.
With PLX/PEX cards, even in x8 mode the card will work just fine, just slower because the switch has half the uplink lanes. Even if you use the old x8 Gen 3 Accelsior 4x M.2 adapter in an electrically x4 slot, the OWC card works OK.
 
Are there any known specifications for the SSDs themselves? Just some generic 2 TB Phison E26 drives provided by HighPoint?
It's a mix of all the Gen 5 drives that I've tested in the past. Not provided by Highpoint, these are my own drives.

1 - For the ASPM testing, which specific BIOS options are enabled? Is it just the native ASPM support, or are you turning them all on? I've been fiddling with this stuff lately while trying to manage M.2 temps.
Turn it on in the BIOS, turn it on in Windows power settings.
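If you want to double-check what Windows is actually set to from a script, something along these lines works (SUB_PCIEXPRESS/ASPM are powercfg's built-in aliases for the "Link State Power Management" setting; 0 = Off, 1 = Moderate, 2 = Maximum power savings):

import subprocess

# Query the active power plan's PCIe Link State Power Management value.
out = subprocess.run(
    ["powercfg", "/query", "SCHEME_CURRENT", "SUB_PCIEXPRESS", "ASPM"],
    capture_output=True, text=True, check=True,
).stdout
print(out)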

Does the card have an option for some kind of dumb passthrough mode that presents the individual drives directly to the OS, including SMART? If not, how would you use the drives separately?
The drives are presented to the host as individual drives. The card has no RAID support. As mentioned in the review, these drives appear the same way as drives in the motherboard's native M.2 slots. You can see, manage, and secure erase each drive individually in the BIOS.
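Since they show up as plain NVMe devices, something like this (a rough sketch on Linux, requires smartmontools and root) can pull health data from each drive:

import glob, json, subprocess

# Enumerate NVMe controllers and print a few SMART/health fields via smartctl's JSON output.
for dev in sorted(glob.glob("/dev/nvme[0-9]")):
    out = subprocess.run(["smartctl", "-a", "-j", dev],
                         capture_output=True, text=True)
    data = json.loads(out.stdout)
    health = data.get("nvme_smart_health_information_log", {})
    print(dev,
          data.get("model_name", "?"),
          f"temp={health.get('temperature', '?')}C",
          f"used={health.get('percentage_used', '?')}%")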
 