
Kingston NV3

Indeed, and subconsciously I assumed everyone would be aware of the mechanics by now
Speaking of the mechanics of the reviews, legitimate question: What do you do with the drives once you've done the review?
Barring a sample that specifically must be returned, as memory density increases (both from process limits and from the voltage-separation states defining the bits stored in a cell), it would be interesting to see you pack some of the drive with validation data and then check its validity x amount of days/weeks/months later - I'd go with 6 months as a reasonable option.

Ideally you'd want to make sure the data has been packed into a TLC/QLC state - maybe partition the drive and put the required check data in one partition, then fill the rest with random data to exhaust the SLC cache and force the drive to remap the data before putting it in storage.
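That retention check could be scripted easily enough. A minimal sketch in Python (all file names, counts, and sizes are placeholders): write random files, record their SHA-256 hashes in a manifest, then re-verify the hashes after the drive has sat unpowered for the chosen period.

```python
import hashlib
import os

def write_validation_data(path: str, num_files: int, file_size: int) -> dict:
    """Fill a directory with random files and record their SHA-256 hashes."""
    manifest = {}
    os.makedirs(path, exist_ok=True)
    for i in range(num_files):
        data = os.urandom(file_size)
        name = os.path.join(path, f"check_{i:04d}.bin")
        with open(name, "wb") as f:
            f.write(data)
        manifest[name] = hashlib.sha256(data).hexdigest()
    return manifest

def verify_validation_data(manifest: dict) -> list:
    """Re-hash each file months later; return any that no longer match."""
    failures = []
    for name, expected in manifest.items():
        with open(name, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            failures.append(name)
    return failures
```

You'd save the manifest somewhere off the test drive, of course, and run the verify pass against it after the storage interval.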

Hearing some mentions of drives massively losing performance after being stored for a while and struggling to read back at anywhere near their rated speed - I think in some cases even failing outright.

Any half decent drive should function within a reasonable window, but Kingston sure do scrape the barrel a bit sometimes.
 
Fixed


Are you aware of any faster QLC drive?


Solidigm P44 Pro is the exact same drive, and I'm assuming they have sold more units than Hynix? I might be wrong though
I completely forgot the P44 exists. At least on Amazon it looks like the Hynix has sold more so far, but with the Solidigm line being pushed as hard as it is in the server space, I can imagine that by the end of the decade Hynix will only use Solidigm for its consumer branding. So it's probably better to use the Solidigm name for any references in the future.
 
I'm using a 980pro and a 990pro with latest firmwares and I'm not seeing any issues.
Doesn't mean the issue isn't real. You were just one of the ones lucky enough not to be affected.
 
With QLC, each bit of SLC takes four bits of QLC, so 508 GB x 4 bits per SLC = 2032 GB. This large SLC cache is good...
What nonsense is this? A bit is a bit. With SLC, one cell holds one bit, while with QLC each cell holds 4 bits. But a 508 GB cache is just that -- 508 GB.
 
But a 508GB cache is just that -- 508 GB.
So if you write 508 GB into QLC cells operating as SLC, how much space do you have left?
 
So if you write 508 GB into QLC cells operating as SLC, how much space do you have left?
The 508GB dynamic cache will consume 2032GB of drive capacity -- but it's still only holding 508 GB of actual data.

Edit: re-reading the article, I assume by "large cache", the author meant the original 508GB size, and didn't intend to imply the cache had somehow grown to 4X its original capacity. The reverse is more nearly true: as you fill up the drive, your cache decreases in lock-step with free capacity. The cache is only "large" when your drive is near empty.
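The arithmetic being argued over here is just the bits-per-cell ratio. A quick sketch of the numbers from the review (508 GB cache, 4 bits per QLC cell):

```python
BITS_PER_QLC_CELL = 4  # QLC stores 4 bits per cell
BITS_PER_SLC_CELL = 1  # pSLC mode stores 1 bit per cell

def qlc_capacity_consumed_gb(pslc_cache_gb: float) -> float:
    """Native QLC capacity occupied while those cells run in pSLC mode."""
    return pslc_cache_gb * BITS_PER_QLC_CELL / BITS_PER_SLC_CELL

# A 508 GB pSLC cache ties up 2032 GB of QLC capacity,
# yet still holds only 508 GB of user data.
```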
 
Last edited:
I assume by "large cache", the author meant the original 508GB size
Yes, in contrast to drives with 32 GB, which is full in 10 seconds
 
Yes, in contrast to drives with 32 GB, which is full in 10 seconds
Ignoring the fact that DRAM cache is orders of magnitude faster than flash 'cache' -- even operating in SLC mode -- a drive with 32 GB of DRAM cache will have that size cache always. Even when the drive is totally full. This Kingston drive has that cache size only as long as you keep the drive 3/4 empty. I have to admit that's an excellent sales tactic ... by the time you lose most of your cache, the 30-day return window is already closed.

The situation is even worse than that. You can fill a 32GB DRAM cache with data in 10 seconds ... but a minute later, your cache is fully restored and ready to go, as the data gets flushed to permanent storage. Fill this drive's cache with data, and it never, ever recovers its write cache. Not unless you physically delete data from the drive.
 
Last edited:
32GB DRAM cache with data in 10 seconds
You are confusing something here .. I meant the configured pSLC cache size. Look at the aging SATA drives. Also, NV2 has 88 GB
 
You are confusing something here .. I meant the configured pSLC cache size....
I assumed you were contrasting these to DRAM-cached drives. The fact remains that it's difficult to call this "larger cache" good, when it's only that size on a new, empty drive.
 
Ignoring the fact that DRAM cache is orders of magnitude faster than flash 'cache' -- even operating in SLC mode
Some of the fastest SSDs (Samsung 990 Pro, Crucial T500 and T700) have LPDDR4-4266 memory with a 32-bit data bus, hence 17.2 GB/s. Their highest write speed to SLC cache is ~7 GB/s (Gen 4) or ~11 GB/s (T700, which is Gen 5). That's about 0.4 orders of magnitude or less.
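For what it's worth, that bandwidth figure checks out. A quick sketch of the arithmetic (the ~7 GB/s pSLC figure is the one quoted in this thread, not from a spec sheet):

```python
import math

def dram_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Peak DRAM bandwidth in GB/s from transfer rate (MT/s) and bus width."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

lpddr4 = dram_bandwidth_gbs(4266, 32)  # ~17.1 GB/s
gap = math.log10(lpddr4 / 7.0)         # ~0.39 orders of magnitude
```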
a drive with 32 GB
DRAM size is about 1/1000 of the SSD capacity. That seems to be the hard rule with almost no exceptions. But some 4 TB, Gen 5 SSDs have 8 GB of DRAM, including the T700.
of DRAM cache will have that size cache always. Even when the drive is totally full. This Kingston drive has that cache size only as long as you keep the drive 3/4 empty.
Here's the most important part: the DRAM is not the write cache. It does hold some metadata temporarily (the FTL), so it's probably not wrong to call it cache or buffer, but that's it. I'll also quote the most recent TPU review: "Two Micron DDR4-3200 chips provide a total of 2 GB of fast DRAM storage for the controller to store the mapping tables." If you have any proof or indication of the contrary, I'm genuinely interested.
32 GB of write cache would also need battery backup. Capacitors alone would be out of question.
I have to admit that's an excellent sales tactic ... by the time you lose most of your cache, the 30-day return window is already closed.
Those salesmen don't even tell you how large the cache is ... nor the things that matter more, such as non-queued random read speed. If that's your first SSD, you'll be happy anyway, for much more than 30 days. If it isn't, and you care about performance, you'll already understand the limits of SSDs.
The situation is even worse than that. You can fill a 32GB DRAM cache with data in 10 seconds ... but a minute later, your cache is fully restored and ready to go, as the data gets flushed to permanent storage. Fill this drive's cache with data, and it never, ever recovers its write cache. Not unless you physically delete data from the drive.
Hum hum, where did you get that? Look at this review for a nice example, the drive has already started the slow process of rewriting pSLC to TLC at 80% capacity. It's a pattern you'll see in many SSDs. When the SSD has some free space again, it can use it for caching again. If it has no free space then you don't need write caching because there's no free space.
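The shrinking-cache behaviour described there can be modelled in a few lines, assuming the dynamic cache scales with free space at the 4:1 QLC ratio (the 508 GB ceiling comes from the review; the exact firmware policy is a guess):

```python
def dynamic_pslc_cache_gb(free_qlc_gb: float, max_cache_gb: float = 508.0) -> float:
    """Available pSLC cache: each cached GB occupies 4 GB of free QLC capacity."""
    return min(max_cache_gb, free_qlc_gb / 4)

# Near-empty 2 TB drive: the full 508 GB cache is available.
# At ~80% full (400 GB free), only ~100 GB of cache remains.
```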
 
I'll also quote the most recent TPU review: "Two Micron DDR4-3200 chips provide a total of 2 GB of fast DRAM storage for the controller to store the mapping tables." If you have any proof or indication of the contrary, I'm genuinely interested. 32 GB of write cache would also need battery backup. Capacitors alone would be out of question.
I had an IBM SSD with capacitor-based backup in case of power loss. That's been several years though, and a quick Google search doesn't show any recent drives with them, so I'll give you this point.

Hum hum, where did you get that?... If the drive has no free space then you don't need write caching because there's no free space.
Untrue. I have a 4TB drive that was fully allocated within an hour of installing it, yet it processes several hundred MB of writes per day. Databases are just one of many common applications that write to allocated space.
 
I had an IBM SSD with capacitor-based backup in case of power loss. That's been several years though, and a quick Google search doesn't show any recent drives with them, so I'll give you this point.


Untrue. I have a 4TB drive that was fully allocated within an hour of installing it, yet it processes several hundred MB of writes per day. Databases are just one of many common applications that write to allocated space.

Enterprise/datacenter SSDs do have capacitors, a.k.a. hardware PLP (Power Loss Protection), though for M.2 they'll be double-sided and much more expensive than consumer drives. All drives have overprovisioned space beyond "100%", generally at least the 7.37% difference between GiB (gibibytes) and GB, else TRIM and garbage collection wouldn't work. Allocating non-dynamic space to a DB (or VM) just means that space is now managed by the DB; it's still free space from the point of view of that DB, and not having enough free space will TANK your performance. You usually want the transaction logs on a separate drive, and you have to make sure those have sufficient space too, etc.
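That 7.37% figure is just the binary/decimal gap; a one-liner to check it:

```python
GIB = 2**30   # flash dies are built in binary sizes
GB = 10**9    # drive capacities are marketed in decimal gigabytes

# The gap becomes the minimum spare area available for TRIM/garbage collection:
min_overprovision = (GIB - GB) / GB  # ~0.0737, i.e. ~7.37%
```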

Point is, having the ability to use your whole drive as pSLC cache is great, a trend we are starting to see in recent drives. HMB DRAM-less drives have good performance for the great majority of cases for consumers, while being cheaper and MUCH more power-efficient. The fact that QLC has come this far is also very encouraging! As the review correctly states, this would be great if it were priced correctly, it being more expensive than similar drives with TLC NAND is a no-go.
 
Last edited:
Allocating non-dynamic space to a DB (or VM) just means that that space is now managed by the DB, it's still free space from the point of view of that DB
...and? It's still not free space as far as the *drive* is concerned, which means your brand new drive now has zero cache.

...Point is, having the ability to use your whole drive as pSLC cache is great
The point is that you only have that ability until you actually start using the drive .... once you install an OS and the first piece of software, it's no longer true. If a person keeps their drives at even 50% capacity in normal use, then only half the drive can be used as pSLC cache -- and most people use more than half their drive.
 
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
 
Why do I feel like QLC NAND is a solution looking for a problem? As many users have already stated, there are better and cheaper SSDs of the same capacities that use TLC NAND rather than QLC NAND which, as the review data shows, basically sucks.
 
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
Agreed, for most it would be inconsequential, but it is a bit of a guide as to how good the controller and NAND are in terms of performance, especially as drive free space drops.
They probably set such a massive cache size partly because of the drive's poor performance when doing a QLC remap of data, although, in my mind, the methodology described in the review shouldn't really result in a scenario where the SLC cache is even used - the controllers should be smarter than they are.
I'm amazed the logic in these controllers hasn't been programmed to figure out that if it's receiving a constant stream of write operations with no interruptions (i.e. no read requests or other instructions), it should just pack the incoming data straight into a QLC-defined block. The physical memory cell programming speed/time should be no different writing as SLC or QLC; the controller deciding what to do with the data might be the only performance impact.

Obviously there are some situations where this may not be possible, such as random writes, but if a stream of data coming in is bigger than 16/32K and would fill the 4/8K block (or whatever size they use), why write it out as an SLC operation? It's not like they are shouting about massively improved P/E cycles, and moving data from the SLC cache to pack it into a QLC block is just additional write amplification (something very few reviews cover these days, and I'm sure it's worse now than it was a decade ago - has anyone managed to better SandForce's <1 WA factor?).

Even drives with DRAM cache work in the same dumb way (looking at old review of Corsair MP400)...

I can only infer that:
a) they are either too lazy to tackle the problem
b) the issue is actually more of a NVM spec problem that prohibits this
c) it's an OS issue (although nothing to suggest *nix/BSD/other is any better)
d) "firmware is hard to write man... I don't get paid for how many lines of code there are... stop picking holes!"
e) "a troll is blocking the path"... i.e. some patent is stopping this actually being done

Although I think many years ago I vaguely remember this being something some drives aimed for DC use would do...
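That extra pSLC-then-fold hop can be put in write-amplification terms. A rough sketch with made-up volumes (real WA would also include garbage-collection and metadata traffic):

```python
def write_amplification(host_writes_gb: float, nand_writes_gb: float) -> float:
    """WA factor = total NAND program volume / volume the host actually wrote."""
    return nand_writes_gb / host_writes_gb

# Data first programmed into pSLC and later folded into QLC is written
# at least twice, so the floor is WA = 2 before any garbage collection:
host = 100.0
nand = host * 2  # one pass into pSLC, one fold into QLC
```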
 
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
Misunderstandings such as yours are why I made the initial post. Your statement is correct **only** if you begin writing to an empty drive. Start writing to a drive already fully allocated, and your write speeds start at 200MB/sec at the very first byte written. Apps from databases to document management systems to many others perform update-in-place writes.

Even for those of you not using such applications, the rule often quoted is that, for good performance, you want to keep at least 10% of a drive unallocated. Well, with a drive like this, if you want that free 10% to be writeable at the full 4500 MB/s benchmark figure, you need to leave 40% of your drive free at all times.
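Putting numbers on that: with the 4:1 pSLC-to-QLC ratio, the free space you must maintain is four times the capacity you want writable at cached speed (a simple model that ignores any fixed/static cache portion the firmware might reserve):

```python
SLC_EXPANSION = 4  # each GB of pSLC cache occupies 4 GB of QLC capacity

def free_fraction_needed(cached_fraction: float) -> float:
    """Fraction of the drive to leave empty so `cached_fraction`
    of its capacity can be written at full pSLC speed."""
    return cached_fraction * SLC_EXPANSION

# 10% of the drive writable at full speed => keep 40% of the drive empty.
```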
 
Start writing to a drive already fully allocated, and your write speeds start at 200MB/sec at the very first byte written.
If you're writing to an almost-full drive, you are going to see performance issues regardless of which drive it is.
 
If you're writing to a almost-full drive you are going to see performance issues regardless of which drive it is.
Wrong on two counts. As mentioned above, I have a 4TB drive that's been 100% fully allocated since installed, yet still is regularly written to at full speed, with zero performance issues. Such update-in-place writes are a common, albeit not ubiquitous scenario. Had I been using this drive, every byte written would be at the same 200MB/s pace that this review calls "some of the worst results we've seen".

But even in the case of general system usage, a drive with a fixed cache will operate fine with 10% free space. This drive, though, would either have a fully un-cached 10% free that writes at a snail's pace of 200MB/s, or a cached but jam-packed 2.5% free. Either scenario will certainly cause performance issues.
 
Is the Solidigm P41 Plus EoL or something? To my knowledge, it would've been this drive's closest competitor.
At least previously, the P41 Plus was the fastest DRAM-less QLC Gen4 (budget) NVMe drive.
 
The NVx drives get worse every generation, and beyond that Kingston cheaps out on every model. The NV1 launched as TLC and they silently changed the spec to QLC while keeping the same part numbers, which is fishy. I was using an NV2 500GB at work on a Dell Mini and cloned it back to a spare A2000, because I find it disturbing how easy it is to choke the NV2's QLC with relatively light loads: latency spikes, and the system becomes clearly less responsive. Stay away from this hardware unless you don't have a choice.
 
Yeah, but for the KC versions, the hardware used seems to be stable


@techpowerup, which tool do you use for the pSLC cache / write-intensive usage tests?
 
Yeah, but for the KC versions, the hardware used seems to be stable
Take a look at the KC3000 and Fury Renegade product pages. Not only do they specify TBW and TLC but also the controller, which is rare. You risk (huh?) getting Kioxia TLC instead of Micron TLC, though.

@techpowerup, which tool do you use for the pSLC cache / write-intensive usage tests?
You didn't summon the right wizard. @W1zzard , there's a username on your forums that easily confuses people.
 