
Kingston NV3

Joined
Jun 20, 2024
Messages
468 (2.10/day)
Indeed, and subconsciously I assumed everyone would be aware of the mechanics by now
Speaking of the mechanics of the reviews, legitimate question: What do you do with the drives once you've done the review?
Barring being a sample that specifically must be returned, as memory density increases (both through process limits and the voltage separation states defining the bits stored in a cell) it would be interesting to see you pack some of the drive with validation data and then check its validity x amount of days/weeks/months later - I'd go with 6 months as a reasonable option.

Ideally you'd want to make sure the data has been packed into a TLC/QLC state - maybe partition the drive and put the required check data in one partition, then do some filling of random data which exhausts the SLC cache and forces the drive to remap data before putting it in storage.
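Something like this rough sketch is what I have in mind - the paths, sizes and file counts are just placeholder assumptions, and the SLC-cache-exhaustion fill would still be a separate step:

```python
# Rough sketch of the retention check idea above (paths/sizes are placeholder assumptions).
# Phase 1: fill a partition with random data and record SHA-256 checksums.
# Phase 2 (months later): re-read everything, compare checksums, and note read throughput.
import hashlib, json, os, sys, time

TARGET_DIR = "E:/retention"     # hypothetical partition set aside for the check data
FILE_SIZE = 1024**3             # 1 GiB per file
FILE_COUNT = 64                 # ~64 GiB of validation data
MANIFEST = "retention_manifest.json"
CHUNK = 16 * 1024**2

def write_phase():
    os.makedirs(TARGET_DIR, exist_ok=True)
    manifest = {}
    for i in range(FILE_COUNT):
        path = os.path.join(TARGET_DIR, f"chunk_{i:03d}.bin")
        digest = hashlib.sha256()
        with open(path, "wb") as f:
            remaining = FILE_SIZE
            while remaining:
                block = os.urandom(min(remaining, CHUNK))
                f.write(block)
                digest.update(block)
                remaining -= len(block)
        manifest[path] = digest.hexdigest()
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f, indent=2)

def verify_phase():
    with open(MANIFEST) as f:
        manifest = json.load(f)
    bad, total_bytes, start = 0, 0, time.perf_counter()
    for path, expected in manifest.items():
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(CHUNK), b""):
                digest.update(block)
                total_bytes += len(block)
        if digest.hexdigest() != expected:
            bad += 1
            print(f"MISMATCH: {path}")
    speed = total_bytes / (time.perf_counter() - start) / 1e6
    print(f"{bad} corrupted files out of {len(manifest)}, average read speed ~{speed:.0f} MB/s")

if __name__ == "__main__":
    write_phase() if sys.argv[1:] == ["write"] else verify_phase()
```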

Hearing some mentions of drives massively losing performance when being stored for a while and struggling to read back at anywhere near their rated speed - I think in some cases even failing outright.

Any half decent drive should function within a reasonable window, but Kingston sure do scrape the barrel a bit sometimes.
 
Joined
Aug 7, 2023
Messages
30 (0.06/day)
System Name SigmaMATER
Processor Ryzen 7 5800x3d
Motherboard x570 Aorus Master
Cooling Arctic Freezer iii 420
Memory Corsair Dominator Platinum 32GB 3600MHz C15
Video Card(s) Rx 7900 xtx
Storage Crucial P5 plus 2tb, Crucial MX500 2tb, Seagate 2tb SSHD, Western Digital 10tb Ultrastar 2x
Display(s) Predator x27 and some lenovo and some other one
Case Corsair 7000D
Power Supply Evga 1600 p2
Mouse Logitech G Pro Wireless
Keyboard Evga z20
Software Windows 11 Enterprise
Fixed


Are you aware of any faster QLC drive?


Solidigm P44 Pro is the exact same drive, and I'm assuming they have sold more units than Hynix? I might be wrong though
I completely forgot the P44 exists. At least on Amazon it looks like the Hynix has sold more as of now, but with the Solidigm line getting pushed as hard as it is in the server space, I can imagine that by the end of the decade Hynix will probably only use Solidigm for their consumer branding from then on. So it's probably better to use the Solidigm for any references in the future.
 
Joined
Apr 2, 2008
Messages
477 (0.08/day)
System Name -
Processor Ryzen 9 5900X
Motherboard MSI MEG X570
Cooling Arctic Liquid Freezer II 280 (4x140 push-pull)
Memory 32GB Patriot Steel DDR4 3733 (8GBx4)
Video Card(s) MSI RTX 4080 X-trio.
Storage Sabrent Rocket-Plus-G 2TB, Crucial P1 1TB, WD 1TB sata.
Display(s) LG Ultragear 34G750 nano-IPS 34" ultrawide
Case Define R6
Audio Device(s) Xfi PCIe
Power Supply Fractal Design ION Gold 750W
Mouse Razer DeathAdder V2 Mini.
Keyboard Logitech K120
VR HMD Er no, pointless.
Software Windows 10 22H2
Benchmark Scores Timespy - 24522 | Crystalmark - 7100/6900 Seq. & 84/266 QD1 |
I'm using a 980pro and a 990pro with latest firmwares and I'm not seeing any issues.
Doesn't mean the issue isn't real. You were just one of the ones lucky enough not to be affected.
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
With QLC, each bit of SLC takes four bits of QLC, so 508 GB x 4 bits per SLC = 2032 GB. This large SLC cache is good...
What nonsense is this? A bit is a bit. With SLC, one cell holds one bit, while with QLC each cell holds 4 bits. But a 508GB cache is just that -- 508 GB.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,089 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
But a 508GB cache is just that -- 508 GB.
So if you write 508 GB into QLC cells operating as SLC, how much space do you have left?
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
So if you write 508 GB into QLC cells operating as SLC, how much space do you have left?
The 508GB dynamic cache will consume 2032GB of drive capacity -- but it's still only holding 508 GB of actual data.

Edit: re-reading the article, I assume by "large cache", the author meant the original 508GB size, and didn't intend to imply the cache had somehow grown to 4X its original capacity. The reverse is more nearly true: as you fill up the drive, your cache decreases in lock-step with free capacity. The cache is only "large" when your drive is near empty.
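To put rough numbers on how the cache shrinks as the drive fills, here's a trivial model - it assumes (simplistically) that the controller can borrow every free QLC cell for pSLC, which real firmware won't quite do:

```python
# Back-of-the-envelope model of a dynamic pSLC cache. Simplifying assumption:
# every free QLC cell can be borrowed for pSLC duty (real firmware holds some back).
def pslc_cache_gb(capacity_gb: float, used_gb: float, bits_per_cell: int = 4) -> float:
    free_gb = capacity_gb - used_gb
    # A cell storing 4 bits as QLC stores only 1 bit in pSLC mode,
    # so free QLC capacity translates into a quarter of that as cache.
    return free_gb / bits_per_cell

drive = 2048  # roughly a 2 TB-class drive like the one reviewed
for used in (0, 512, 1024, 1536, 1843):
    print(f"{used:>5} GB used -> ~{pslc_cache_gb(drive, used):4.0f} GB of pSLC cache")
# Empty drive: ~512 GB of cache. 90% full (1843 GB used): ~51 GB of cache.
```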
 
Last edited:

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,089 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
I assume by "large cache", the author meant the original 508GB size
Yes, in contrast to drives with 32 GB, which is full in 10 seconds
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
Yes, in contrast to drives with 32 GB, which is full in 10 seconds
Ignoring the fact that DRAM cache is orders of magnitude faster than flash 'cache' -- even operating in SLC mode -- a drive with 32 GB of DRAM cache will have that size cache always. Even when the drive is totally full. This Kingston drive has that cache size only as long as you keep the drive 3/4 empty. I have to admit that's an excellent sales tactic ... by the time you lose most of your cache, the 30-day return window is already closed.

The situation is even worse than that. You can fill a 32GB DRAM cache with data in 10 seconds ... but a minute later, your cache is fully restored and ready to go, as the data gets flushed to permanent storage. Fill this drive's cache with data, and it never, ever recovers its write cache. Not unless you physically delete data from the drive.
 
Last edited:

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,089 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
32GB DRAM cache with data in 10 seconds
You are confusing something here .. I meant the configured pSLC cache size. Look at the aging SATA drives. Also, NV2 has 88 GB
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
You are confusing something here .. I meant the configured pSLC cache size....
I assumed you were contrasting these to DRAM-cached drives. The fact remains that it's difficult to call this "larger cache" good when it's only that size on a new, empty drive.
 
Joined
Jan 3, 2021
Messages
3,751 (2.52/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Ignoring the fact that DRAM cache is orders of magnitude faster than flash 'cache' -- even operating in SLC mode
Some of the fastest SSDs (Samsung 990 Pro, Crucial T500 and T700) have LPDDR4-4266 memory with a 32-bit data bus, hence 17.2 GB/s. Their highest write speed to SLC cache is ~7 GB/s (Gen 4) or ~11 GB/s (T700, which is Gen 5). That's about 0.4 orders of magnitude or less.
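For anyone who wants to check that arithmetic (my own figures, not from any review):

```python
# Quick check of the "orders of magnitude" arithmetic (my own figures, not TPU's).
import math

dram_bw = 4266e6 * 32 / 8 / 1e9   # LPDDR4-4266 on a 32-bit bus -> ~17.1 GB/s
slc_gen4 = 7.0                    # typical Gen 4 pSLC write speed in GB/s
slc_gen5 = 11.0                   # T700-class Gen 5 pSLC write speed in GB/s

print(f"DRAM bandwidth: {dram_bw:.1f} GB/s")
print(f"Gap vs Gen 4 pSLC: {math.log10(dram_bw / slc_gen4):.2f} orders of magnitude")
print(f"Gap vs Gen 5 pSLC: {math.log10(dram_bw / slc_gen5):.2f} orders of magnitude")
```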
a drive with 32 GB
DRAM size is about 1/1000 of the SSD capacity. That seems to be the hard rule with almost no exceptions. But some 4 TB, Gen 5 SSDs have 8 GB of DRAM, including the T700.
of DRAM cache will have that size cache always. Even when the drive is totally full. This Kingston drive has that cache size only as long as you keep the drive 3/4 empty.
Here's the most important part: the DRAM is not the write cache. It does hold some metadata temporarily (the FTL), so it's probably not wrong to call it cache or buffer, but that's it. I'll also quote the most recent TPU review: "Two Micron DDR4-3200 chips provide a total of 2 GB of fast DRAM storage for the controller to store the mapping tables." If you have any proof or indication of the contrary, I'm genuinely interested.
32 GB of write cache would also need battery backup. Capacitors alone would be out of question.
I have to admit that's an excellent sales tactic ... by the time you lose most of your cache, the 30-day return window is already closed.
Those salesmen don't even tell you how large the cache is ... nor the things that matter more, such as non-queued random read speed. If that's your first SSD, you'll be happy anyway, for much more than 30 days. If it isn't, and you care about performance, you'll already understand the limits of SSDs.
The situation is even worse than that. You can fill a 32GB DRAM cache with data in 10 seconds ... but a minute later, your cache is fully restored and ready to go, as the data gets flushed to permanent storage. Fill this drive's cache with data, and it never, ever recovers its write cache. Not unless you physically delete data from the drive.
Hum hum, where did you get that? Look at this review for a nice example: the drive has already started the slow process of rewriting pSLC to TLC at 80% capacity. It's a pattern you'll see in many SSDs. When the SSD has some free space again, it can use it for caching again. If it has no free space, then you don't need write caching because there's no free space.
[attached chart from the review]
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
I'll also quote the most recent TPU review: "Two Micron DDR4-3200 chips provide a total of 2 GB of fast DRAM storage for the controller to store the mapping tables." If you have any proof or indication of the contrary, I'm genuinely interested. 32 GB of write cache would also need battery backup. Capacitors alone would be out of question.
I had an IBM SSD with capacitor-based backup in case of power loss. That's been several years though, and a quick Google search doesn't show any recent drives with them, so I'll give you this point.

Hum hum, where did you get that?... If the drive has no free space then you don't need write caching because there's no free space.
Untrue. I have a 4TB drive that was fully allocated within an hour of installing it, yet it processes several hundred MB of writes per day. Databases are just one of many common applications that write to allocated space.
 
Joined
Jan 11, 2009
Messages
9,252 (1.58/day)
Location
Montreal, Canada
System Name Homelabs
Processor Ryzen 5900x | Ryzen 1920X
Motherboard Asus ProArt x570 Creator | AsRock X399 fatal1ty gaming
Cooling Silent Loop 2 280mm | Dark Rock Pro TR4
Memory 128GB (4x32gb) DDR4 3600Mhz | 128GB (8x16GB) DDR4 2933Mhz
Video Card(s) EVGA RTX 3080 | ASUS Strix GTX 970
Storage Optane 900p + NVMe | Optane 900p + 8TB SATA SSDs + 48TB HDDs
Display(s) Alienware AW3423dw QD-OLED | HP Omen 32 1440p
Case be quiet! Dark Base Pro 900 rev 2 | be quiet! Silent Base 800
Power Supply Corsair RM750x + sleeved cables| EVGA P2 750W
Mouse Razer Viper Ultimate (still has buttons on the right side, crucial as I'm a southpaw)
Keyboard Razer Huntsman Elite, Pro Type | Logitech G915 TKL
I had an IBM SSD with capacitor-based backup in case of power loss. That's been several years though, and a quick Google search doesn't show any recent drives with them, so I'll give you this point.


Untrue. I have a 4TB drive that was fully allocated within an hour of installing it, yet it processes several hundred MB of writes per day. Databases are just one of many common applications that write to allocated space.

Enterprise/datacenter SSDs do have capacitors, a.k.a. hardware PLP (Power Loss Protection), though for M.2 they'll be double-sided and much more expensive than consumer drives. All drives have overprovisioned space beyond "100%", generally at least the 7.37% difference between GiB (gibibytes) and GB, else TRIM and garbage collection wouldn't work. Allocating non-dynamic space to a DB (or VM) just means that that space is now managed by the DB; it's still free space from the point of view of that DB, and not having enough free space will TANK your performance. You usually want the transaction logs on a separate drive and have to make sure those also have sufficient space, etc.
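For anyone wondering where the 7.37% comes from, it's just the unit gap (quick arithmetic):

```python
# Where the ~7.37% baseline overprovisioning comes from: drives are sold in GB
# (10^9 bytes) while NAND is manufactured in power-of-two GiB (2^30 bytes).
gib, gb = 2**30, 10**9
print(f"Hidden spare area: {(gib - gb) / gb:.2%}")  # -> 7.37%
```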

Point is, having the ability to use your whole drive as pSLC cache is great, a trend we are starting to see in recent drives. HMB DRAM-less drives have good performance for the great majority of cases for consumers, while being cheaper and MUCH more power-efficient. The fact that QLC has come this far is also very encouraging! As the review correctly states, this would be great if it were priced correctly, it being more expensive than similar drives with TLC NAND is a no-go.
 
Last edited:
Joined
Sep 29, 2020
Messages
158 (0.10/day)
Allocating non-dynamic space to a DB (or VM) just means that that space is now managed by the DB, it's still free space from the point of view of that DB
...and? It's still not free space as far as the *drive* is concerned, which means your brand new drive now has zero cache.

...Point is, having the ability to use your whole drive as pSLC cache is great
The point is that you only have that ability until you actually start using the drive .... once you install an OS and the first piece of software, it's no longer true. If a person keeps their drives at even 50% capacity in normal use, then only half the drive can be used as pSLC cache -- and most people use more than half their drive.
 
Joined
Feb 18, 2005
Messages
5,945 (0.82/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
 
Joined
Mar 6, 2017
Messages
3,369 (1.17/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
Why do I feel like QLC-NAND is a solution looking for a problem? As many users have already stated, there are better and cheaper SSDs of the same capacities that use TLC-NAND versus QLC-NAND which, as the review data shows, basically sucks.
 
Joined
Jun 20, 2024
Messages
468 (2.10/day)
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
Agreed, for most it would be inconsequential but it is a bit of a guide as to how good the controller and NAND is in terms of performance, especially as drive free space drops.
Probably part of the reason they set a massive cache size is its poor performance when remapping data to QLC, although, in my mind, the methodology described in the review shouldn't really result in a scenario where the SLC cache is even needed - the controllers should be smarter than they are.
I'm amazed the logic in these controllers hasn't been programmed to figure out that if it's receiving a constant stream of write operations with no interruptions (i.e. no read requests or other instructions), it should just pack the incoming data into a QLC-defined block - the physical memory cell programming speed/time should be no different writing as SLC or QLC; the controller deciding what to do with the data might be the only performance impact.

Obviously there are some situations where this may not be possible, such as random writes, etc., but if a stream of data coming in is bigger than 16/32K and would fill the 4/8K block (or whatever size they use), why write it out as an SLC operation? It's not like they're shouting about massively improved P/E cycles, and moving data from the SLC cache to pack it into a QLC block is just additional write amplification (something very few reviews cover these days, and I'm sure it's worse now than it was a decade ago - has anyone managed to better SandForce's <1 WA factor?).
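Roughly the heuristic I'm picturing, written out as a sketch - pure speculation on my part, not how any shipping firmware is documented to behave, and the block sizes are made up:

```python
# Hypothetical write-path heuristic (my speculation, not any vendor's documented firmware):
# long uninterrupted sequential streams go straight to QLC blocks, everything else
# is buffered in pSLC first. Folding pSLC back into QLC later is what inflates WA.
QLC_BLOCK_BYTES = 32 * 1024          # illustrative block size, not a real spec value

def choose_destination(stream_bytes: int, interrupted: bool, pslc_free_bytes: int) -> str:
    if not interrupted and stream_bytes >= QLC_BLOCK_BYTES:
        return "QLC-direct"          # enough sequential data to program full QLC blocks
    if pslc_free_bytes > 0:
        return "pSLC-cache"          # small/random writes buffered in SLC mode
    return "QLC-direct"              # cache exhausted, nowhere else to go

def write_amplification(nand_bytes: int, host_bytes: int) -> float:
    # WA factor = bytes the NAND actually absorbed / bytes the host asked to write
    return nand_bytes / host_bytes

# A sustained 1 GiB sequential stream shouldn't need the cache at all:
print(choose_destination(1024**3, interrupted=False, pslc_free_bytes=8 * 1024**3))
# Data written to pSLC and later folded into QLC gets written twice -> WA of ~2:
print(write_amplification(nand_bytes=2 * 1024**3, host_bytes=1024**3))
```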

Even drives with DRAM cache work in the same dumb way (looking at old review of Corsair MP400)...

I can only infer that:
a) they are either too lazy to tackle the problem
b) the issue is actually more of a NVM spec problem that prohibits this
c) it's an OS issue (although nothing to suggest *nix/BSD/other is any better)
d) "firmware is hard to write man... I don't get paid for how many lines of code there are... stop picking holes!"
e) "a troll is blocking the path"... i.e. some patent is stopping this actually being done

Although I think many years ago I vaguely remember this being something some drives aimed for DC use would do...
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
All these children crying their eyes out at the fact that this drive drops to 200MB/s after writing a sequential 508GB of data... when none of them regularly write a full quarter of the drive's capacity worth of data anyway.
Misunderstandings such as yours are why I made the initial post. Your statement is correct **only** if you begin writing to an empty drive. Start writing to a drive already fully allocated, and your write speeds start at 200MB/sec at the very first byte written. Apps from databases to document management systems to many others perform update-in-place writes.

Even for those of you not using such applications, the rule often quoted is that, for good performance, you want to keep at least 10% of a drive unallocated. Well, using a drive like this, if you want that 10% of free space to be writeable at the full 4500MB/sec benchmark figure, then you need to leave 40% of your drive free at all times.
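The arithmetic behind that 40%, using the same simplified 4-bits-per-cell assumption:

```python
# If you want a pSLC cache worth 10% of the drive, the QLC underneath has to donate
# four times that in free space (same simplified 4:1 assumption as earlier).
def free_space_needed(cache_fraction: float, bits_per_cell: int = 4) -> float:
    return cache_fraction * bits_per_cell

print(f"{free_space_needed(0.10):.0%} of the drive must stay empty for a 10% cache")
# Conversely, the usual 10% of free space only buys a cache worth 2.5% of capacity:
print(f"{0.10 / 4:.1%}")
```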
 
Joined
Feb 18, 2005
Messages
5,945 (0.82/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
Start writing to a drive already fully allocated, and your write speeds start at 200MB/sec at the very first byte written.
If you're writing to an almost-full drive you are going to see performance issues regardless of which drive it is.
 
Joined
Sep 29, 2020
Messages
158 (0.10/day)
If you're writing to an almost-full drive you are going to see performance issues regardless of which drive it is.
Wrong on two counts. As mentioned above, I have a 4TB drive that's been 100% fully allocated since installed, yet still is regularly written to at full speed, with zero performance issues. Such update-in-place writes are a common, albeit not ubiquitous scenario. Had I been using this drive, every byte written would be at the same 200MB/s pace that this review calls "some of the worst results we've seen".

But even in the case of general system usage, a drive with a fixed cache will operate fine with 10% free space. This drive, though, would either have a fully un-cached 10% free that writes at a snail's-pace 200MB/s, or a cached but jam-packed 2.5% free. Either scenario will certainly cause performance issues.
 
Joined
Apr 18, 2019
Messages
2,445 (1.16/day)
Location
Olympia, WA
System Name Sleepy Painter
Processor AMD Ryzen 5 3600
Motherboard Asus TuF Gaming X570-PLUS/WIFI
Cooling FSP Windale 6 - Passive
Memory 2x16GB F4-3600C16-16GVKC @ 16-19-21-36-58-1T
Video Card(s) MSI RX580 8GB
Storage 2x Samsung PM963 960GB nVME RAID0, Crucial BX500 1TB SATA, WD Blue 3D 2TB SATA
Display(s) Microboard 32" Curved 1080P 144hz VA w/ Freesync
Case NZXT Gamma Classic Black
Audio Device(s) Asus Xonar D1
Power Supply Rosewill 1KW on 240V@60hz
Mouse Logitech MX518 Legend
Keyboard Red Dragon K552
Software Windows 10 Enterprise 2019 LTSC 1809 17763.1757
Is the Solidigm P41 Plus EoL or something? To my knowledge, it would've been this drive's closest competitor.
At least previously, the P41+ was the fastest DRAMless QLC gen4 (budget) NVME.
 
Joined
Mar 31, 2018
Messages
53 (0.02/day)
The NVx gets worse every generation, and besides that Kingston cheaps out on every model. The NV1 was a TLC drive at launch and they silently changed the spec to QLC while keeping the same part numbers - fishy... I was using an NV2 500GB at work in a Dell Mini and cloned it back to a spare A2000 because I find it disturbing how easy it is to choke the NV2's QLC with relatively light loads: latency spikes, and the system becomes clearly less responsive. Stay away from this hardware unless you don't have a choice.
 
Joined
Jul 21, 2020
Messages
49 (0.03/day)
Location
Bavaria, Germany
Yeah, but for the KC versions the hardware used seems to be stable


@techpowerup, which tool do you use for the pSLC cache / write-intensive usage tests?
 
Joined
Jan 3, 2021
Messages
3,751 (2.52/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Yeah, but for the KC versions the hardware used seems to be stable
Take a look at the KC3000 and Fury Renegade product pages. Not only do they specify TBW and TLC but also the controller, which is rare. You risk (huh?) getting Kioxia TLC instead of Micron TLC, though.

@techpowerup, which tool do you use for the pSLC cache / write-intensive usage tests?
You didn't summon the right wizard. @W1zzard, there's a username on your forums that easily confuses people.
 