
Orico O7000 2 TB

Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
What's the problem with QLC?
Terrible HDD-like speeds when the cache is full, and the size of the cache is proportional to the free space remaining. If your SSD is 80% full, you'll blow through the cache in 10 seconds and then you have a shitty drive that's as bad as a mechanical hard drive from 25 years ago.

Also, the endurance is roughly 1/4 that of TLC NAND, and that limits what you can use the drive for. They're a poor choice for anything that does large writes to the drive, whether that's in a PC or a storage device.
Is it not cheaper at $56 per TB? TBH I don't keep track of the differences between them, so genuine question.
Read the pricing and alternatives section of the conclusion. The $/TB chart uses the other drives' prices from the time of their reviews. The competing SN770 and SN580 have come down in price with the general decline in global NAND pricing since their launch, so the $75/TB shown for them in the chart is out of date: W1zzard states that the faster, better, TLC-equipped SN770 2TB is $120 right now, not the $150 ($75/TB) the chart implies.

Regional pricing makes a big difference too; my NV2, NV3, SN580, SN770, and NM790 prices are different from W1zzard's. You basically need to know which drives are low-endurance, cache-sensitive QLC, and then work out whether you're getting a deal or getting ripped off on a drive-by-drive, country-by-country basis.
Tbh the cache is large enough that you'll have to try really hard to exhaust it.
QLC drives aren't good for huge amounts of writing, so their applications are reduced. You'd kill one as a NAS/storage write-cache, you'd kill it hosting VMs with RAM overcommit, and you'd kill it with regular video editing. As such, QLC drives are best suited to casual-user OS disks and game libraries. Thinking of game libraries in particular, they tend to be full no matter what size your SSD is, because you just evict 150GB games when you run out of space. The pSLC cache shrinks with the free space, so if your game library is almost full, you'll be copying a game to it at HDD-like speeds after the first 15GB have completely filled the almost non-existent cache.
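To put rough numbers on that, here's a toy model of the dynamic pSLC cache. The 25% cache-to-free-space ratio and the two speeds are illustrative assumptions, not measured figures for this drive:

```python
# Toy model of dynamic pSLC caching on a QLC drive.
# All constants are illustrative assumptions, not specs for any particular SSD.

CACHED_SPEED_MBPS = 5000   # assumed sequential speed while the pSLC cache has room
FOLDED_SPEED_MBPS = 150    # assumed HDD-like speed once writes go straight to QLC

def pslc_cache_gb(free_space_gb: float, slc_fraction: float = 0.25) -> float:
    """QLC cells run as SLC hold 1 bit instead of 4, so a cache built from
    free space offers roughly a quarter of that free space as pSLC."""
    return free_space_gb * slc_fraction

def copy_time_seconds(write_gb: float, free_space_gb: float) -> float:
    """Time to write `write_gb` to the drive given the current free space."""
    cache_gb = pslc_cache_gb(free_space_gb)
    fast_part = min(write_gb, cache_gb)
    slow_part = max(write_gb - cache_gb, 0.0)
    return fast_part * 1000 / CACHED_SPEED_MBPS + slow_part * 1000 / FOLDED_SPEED_MBPS

# Copying a 150 GB game to a nearly full drive vs. a mostly empty one:
for free in (60, 1500):
    t = copy_time_seconds(150, free)
    print(f"{free} GB free -> {t/60:.1f} minutes for a 150 GB game")
```

With plenty of free space the 150GB copy takes under a minute; with 60GB free it's pushing 15 minutes, which is exactly the "first 15GB are fast, then HDD speeds" behaviour described above.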
 
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
For the price, this is not shabby for QLC. I wonder what improvements to durability have been made?

Also, the endurance is roughly 1/4 that of TLC NAND, and that limits what you can use the drive for. They're a poor choice for anything that does large writes to the drive, whether that's in a PC or a storage device.
You mean 1/2? QLC generally has half the durability of TLC. Even 1/3 would be closer to accurate.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,963 (3.72/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
what improvements to durability have been made?
Has durability been a problem in the last 10 years? Every single disk I look at, including our servers, is not even close to a meaningful TBW value, even after many years.
 
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
Has durability been a problem in the last 10 years? Every single disk I look at, including our servers, is not even close to a meaningful TBW value, even after many years.
I've had some QLC-based drives come back to me. They are far more problematic than TLC-based drives. I've completely stopped carrying them as a result. Any improvement would be welcome and a reason to reconsider.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,963 (3.72/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
I've had some QLC-based drives come back to me. They are far more problematic than TLC-based drives. I've completely stopped carrying them as a result. Any improvement would be welcome and a reason to reconsider.
Anything with YMTC QLC? Or only Micron?
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
For the price, this is not shabby for QLC. I wonder what improvements to durability have been made?


You mean 1/2? QLC generally has half the durability of TLC. Even 1/3 would be closer to accurate.
It's a very "it depends" value. What you tend to see is a lot of TLC drives understating their endurance values so that the manufacturer has the option to switch to QLC at a later date.

Comparing raw NAND specs, i.e. ignoring the SSD and the controller: current-gen TLC is good for around 1000 P/E cycles, while QLC is good for 300 or so and suffers additional write amplification from a much greater reliance on pSLC-to-QLC conversion writes. We've seen approximately a 3.3x reduction in endurance at the NAND layer for each extra bit added:

SLC = 10000 P/E Cycles
MLC = 3000 P/E Cycles
TLC = 1000 P/E Cycles
QLC = 300 P/E Cycles

Those numbers are not the number of drive writes, since write-amplification is offset by compression, thin-provisioning, over-provisioning, and any other controller/firmware magic that enters the picture for any given vendor/controller combination.
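As a back-of-the-envelope illustration of why those P/E figures don't translate directly into drive writes, the usual approximation is capacity × P/E cycles ÷ write-amplification factor. The WAF values here are assumptions for the sake of the example:

```python
# Rough TBW estimate: capacity * P/E cycles / write-amplification factor (WAF).
# P/E figures are the ballpark raw-NAND numbers above; the WAF values are assumed.

def tbw_estimate(capacity_tb: float, pe_cycles: int, waf: float) -> float:
    return capacity_tb * pe_cycles / waf

# A 2 TB QLC drive at 300 P/E cycles:
print(tbw_estimate(2, 300, waf=1.0))   # 600 TBW if every host write hit QLC exactly once
print(tbw_estimate(2, 300, waf=3.0))   # 200 TBW with heavy pSLC folding / GC overhead

# Same maths for a 2 TB TLC drive at 1000 P/E cycles and a modest WAF:
print(tbw_estimate(2, 1000, waf=1.5))  # ~1333 TBW
```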
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
To me, the biggest sign that QLC has hit rock bottom is that you don't see PLC even being discussed, let alone worked on.
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
To me, the biggest sign that QLC has hit rock bottom is that you don't see PLC even being discussed, let alone worked on.
Based on my reply immediately above yours, very basic extrapolation suggests that a successful PLC NAND product would have only around 100 P/E cycles and raw write speeds so low that a dying microSD card could compete.
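Spelling that extrapolation out (the ~3.3x-per-bit step is just the trend from the list in my earlier post, not a measured PLC spec):

```python
# Extrapolating the ~3.3x endurance drop per extra bit to a hypothetical PLC cell.
pe_cycles = {"SLC": 10_000, "MLC": 3_000, "TLC": 1_000, "QLC": 300}
step = 10_000 / 3_000  # ~3.3x reduction per added bit, from the list above
plc_estimate = pe_cycles["QLC"] / step
print(f"Extrapolated PLC endurance: ~{plc_estimate:.0f} P/E cycles")  # ~90
```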
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Based on the above reply, very basic extrapolation suggests that a successful PLC NAND product would have only around 100 P/E cycles and raw write speeds so low that a dying microSD card could compete.
If that.
 
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
SLC = 10000 P/E Cycles
MLC = 3000 P/E Cycles
TLC = 1000 P/E Cycles
QLC = 300 P/E Cycles
Those numbers are not even close to real. Corrections below:

SLC = 80,000 P/E Cycles
MLC = 30,000 P/E Cycles
TLC = 4000 P/E Cycles
QLC = 900 P/E Cycles

While these are general, approximate numbers, they are closer to actual real-world expectations.

you don't see PLC even being discussed, let alone worked on.
This is because researchers still can't prevent PLC from suffering voltage-induced cascade NAND cell failure.
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
Ooof, that drop from MLC > TLC is rough.
It really is. That drop highlights the problem with the cascading effect that the voltages applied during write cycles have on the NAND. And please understand, I am WAY oversimplifying it.

For NAND write cycles, a voltage is applied which sets a charge, and thus a bit. For SLC, that's just one voltage cycle at the base voltage. For MLC, there are two cycles: one at the base voltage, then a second at a higher voltage for a slightly longer period of time. This is what causes NAND cell wear. TLC is three of those cycles and a bit more time still, and QLC is four, with each step applying an increased voltage. While those differences are small, over time they take a toll on the NAND cell and thus affect durability.

This is why NAND cells wear out. The ability to withstand the voltage degrades with each application, and the chemistry of the NAND cell breaks down until it reaches a point where the cell can no longer function. All NAND will fail; it's a mathematical certainty. How quickly it fails depends on operational conditions and specifications.

The above-described cycling is also why SLC caching works so well and is so fast: there aren't as many operations to complete all at once. It's also why the drive performance in the review @W1zzard posted above literally "falls off a cliff". The drive uses up all of the cells it can run in SLC mode and then has to not only sustain further writes but also start the QLC write cycling. Writing to NAND cells in QLC mode is VERY slow.
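If it helps, here's that idea as a deliberately crude model: more bits per cell means more charge levels to hit, so more program/verify passes per cell and a slower, more wearing write. The pulse counts and timings are made up for illustration; real NAND uses vendor-specific incremental-step programming.

```python
# Toy illustration: bits per cell -> voltage levels -> relative program effort.
# Pass counts and timings are invented for illustration only; they are not
# real figures for any vendor's NAND.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def voltage_levels(bits: int) -> int:
    return 2 ** bits          # distinct charge states the cell must hold apart

def relative_program_time(bits: int, base_pass_us: float = 50.0) -> float:
    # Assume one program/verify pass per bit, each extra pass a bit longer/harder.
    return sum(base_pass_us * (1 + 0.25 * i) for i in range(bits))

for name, bits in CELL_TYPES.items():
    print(f"{name}: {voltage_levels(bits):2d} levels, "
          f"~{relative_program_time(bits):.0f} µs per cell program (toy numbers)")
```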
 
Joined
Jul 29, 2022
Messages
529 (0.60/day)
I have a B550 board and only the 1st M.2 slot is PCIe 4.0; the 2nd is 3.0, as are likely the three other PCIe slots aside from the 1st PCIe 4.0 x16 slot. Is this not common?
It is not only common, but it has been like that since basically forever.
AM5 too usually has one PCIe/M.2 5.0 slot, and the rest are 4.0 or even 3.0 (unless you go for some GPU-slot-less mini-PC board where they can afford to have 2-3x 5.0 M.2 slots). B550 was 4.0 on the first PCIe/M.2 slots and 3.0 on all the rest. B450, and everything all the way back to Ivy Bridge, was PCIe 3.0 on the primary slots and 2.0 on all the rest.
It's why I was so glad that B550 finally had the secondary slots upgraded to 3.0, since I was using a combination of mATX boards and 10GbE cards, and no mATX boards had secondary x8 or larger slots, so I had to get 3.0 x4 for optimal speeds, and nothing prior to B550 had that (not counting the threadwanker boards, which don't even have mATX versions).

People just don't know about it, since nowadays almost nobody uses the secondary slots for anything speed-intensive; at most they have a meme sound card in the x1 slot, or use USB for everything. The latter is even more horrible given how much overhead USB adds to everything.
 
Joined
Sep 27, 2008
Messages
1,210 (0.20/day)
You love copy/pasting this:

but honestly, who even has PCIe 3.0 running in their system in 2024?
PCIe 4.0 is perfectly fine (there is no point in paying the PCIe 5.0 tax), but that formula feels like you're bending over backwards over the wrong argument.

Also, throttling while reading, there's something you don't see every day (yes, I get the heatsink is included, kudos for that).
A lot of AM4 systems are PCIe 3.0: the A520, 400-series, and 300-series chipsets. Systems with APUs are also limited to 3.0.
 
Joined
Feb 1, 2019
Messages
3,666 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
It's a very "it depends" value. What you tend to see is a lot of TLC drives understating their endurance values so that the manufacturer has the option to switch to QLC at a later date.

Comparing raw NAND specs, i.e. ignoring the SSD and the controller: current-gen TLC is good for around 1000 P/E cycles, while QLC is good for 300 or so and suffers additional write amplification from a much greater reliance on pSLC-to-QLC conversion writes. We've seen approximately a 3.3x reduction in endurance at the NAND layer for each extra bit added:

SLC = 10000 P/E Cycles
MLC = 3000 P/E Cycles
TLC = 1000 P/E Cycles
QLC = 300 P/E Cycles

Those numbers are not the number of drive writes, since write-amplification is offset by compression, thin-provisioning, over-provisioning, and any other controller/firmware magic that enters the picture for any given vendor/controller combination.
Are you sure current TLC is that low? That's the same as it was for planar TLC; 3D TLC should have bumped that up, unless these manufacturers have started over-shrinking the nodes again.

Sadly, NVMe drives don't report the cycles, so I can only get data from SATA. My 860 EVO is rated for more than 1000, and I would assume things have progressed since then.

https://www.techpowerup.com/ssd-specs/samsung-860-evo-250-gb.d6 - this claims it's 7000.

A newer drive, the SN850X, is rated at 3000, which is what I have commonly read for 3D TLC; it has kind of reached where planar MLC was.


My guess is modern QLC might have reached 1000 by now?
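One way to sanity-check those guesses against numbers drives actually publish is to work backwards from the rated TBW: implied P/E ≈ TBW × WAF ÷ capacity. The TBW figures below are the commonly listed ratings for the 1TB 860 EVO and 2TB SN850X (if I have them right), and the WAF range is an assumption:

```python
# Implied P/E cycles ~= rated TBW * assumed WAF / capacity.
# TBW and capacity are the published ratings for these models (as I recall them);
# the WAF range of 1.0-2.0 is an assumption, so treat the result as a rough band.

def implied_pe_cycles(tbw_tb: float, capacity_tb: float, waf: float) -> float:
    return tbw_tb * waf / capacity_tb

for model, tbw, cap in [("860 EVO 1TB", 600, 1.0), ("SN850X 2TB", 1200, 2.0)]:
    low = implied_pe_cycles(tbw, cap, waf=1.0)
    high = implied_pe_cycles(tbw, cap, waf=2.0)
    print(f"{model}: ~{low:.0f}-{high:.0f} P/E cycles implied by the TBW rating")
```

Both land around 600-1200 implied cycles, well below the 3000-7000 raw P/E figures quoted for the NAND, which says more about how conservative (or dumbed-down) the TBW ratings are than about the flash itself.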
 
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Are you sure current TLC is that low? That's the same as it was for planar TLC; 3D TLC should have bumped that up, unless these manufacturers have started over-shrinking the nodes again.
No, I think I was confusing drive fills with P/E cycles, going from the AnandTech in-depth articles on TLC and the post-Anand dives into QLC.

Lex has the numbers for raw NAND writes in a later post, which are (as mentioned) very different from drive-write numbers.

The thing is, the numbers are very vague and ballpark. Here's the SN850X you linked:

[Screenshot: TechPowerUp SSD database entry for the SN850X, showing two different endurance figures]


Is it 1700 P/E cycles or 6000 P/E cycles? The specs are a crapshoot, and the only way to know for sure is to exhaust the cycles until the drive breaks.
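If anyone actually wanted to run that experiment, the brute-force version is just a loop that hammers a scratch file and tallies host writes against the capacity. Rough sketch below; the path and sizes are placeholders, it wears the drive out by design, and "drive writes" only approximates P/E cycles because of write amplification:

```python
# Crude endurance burner: repeatedly rewrites a large scratch file and counts
# how many full "drive writes" have been pushed. Run only on a drive you are
# willing to wear out; path and sizes are placeholders.
import os

TARGET = "/mnt/testdrive/burn.bin"   # placeholder path on the drive under test
CHUNK = 64 * 1024 * 1024             # 64 MiB per write call
FILE_SIZE = 32 * 1024**3             # rewrite a 32 GiB file each pass
DRIVE_SIZE = 2 * 1000**4             # 2 TB drive

total_written = 0
buf = os.urandom(CHUNK)              # random data, so we aren't handing the controller
                                     # an easily compressible best-case pattern
while True:                          # Ctrl+C to stop; check SMART between passes
    with open(TARGET, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(buf)
            total_written += CHUNK
        f.flush()
        os.fsync(f.fileno())
    print(f"~{total_written / DRIVE_SIZE:.2f} drive writes so far "
          f"({total_written / 1e12:.1f} TB of host writes)")
```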
 
Joined
Feb 1, 2019
Messages
3,666 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
I got it from:

Endurance:
(up to)
3000 P/E Cycles
(100000 in SLC Mode)

It's in the main table above the notes.

In the past, when people tested SSDs to exhaustion, they found most models vastly exceeded the rated specification. I don't know how it could be tested on NVMe drives, as sadly they no longer report the current number of cycles; we now have a dumbed-down TBW instead.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I got it from:



It's in the main table above the notes.

In the past, when people tested SSDs to exhaustion, they found most models vastly exceeded the rated specification. I don't know how it could be tested on NVMe drives, as sadly they no longer report the current number of cycles; we now have a dumbed-down TBW instead.
How can you report P/E cycles? That's a per-cell metric.

And yes, it makes sense that it would exceed expectations; it's the same as the warranty for any product: you determine it statistically and set it so that you only have to deal with a manageable number of repairs. It's just that storage tends to be a bit more of a sensitive subject than, say, a vacuum cleaner.
 
Joined
Feb 1, 2019
Messages
3,666 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
How can you report P/E cycles? That's a per-cell metric.

And yes, it makes sense that it would exceed expectations; it's the same as the warranty for any product: you determine it statistically and set it so that you only have to deal with a manageable number of repairs. It's just that storage tends to be a bit more of a sensitive subject than, say, a vacuum cleaner.
I don't know the metrics behind how it is calculated, but SATA SSDs report current erase cycles in the SMART info. Whether that's an average across all cells, a peak value, or whether they wait until every cell has exceeded that value, I have no idea.
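For anyone who wants to pull those counters themselves, here's a quick sketch that shells out to smartctl's JSON output. Samsung SATA drives typically expose wear as attribute 177 (Wear_Leveling_Count); other vendors use different IDs, and on NVMe you only get the coarse "Percentage Used" counter, so treat the attribute list as something to adapt rather than gospel:

```python
# Read wear-related counters via smartctl's JSON output (requires smartmontools).
# Attribute IDs below are common wear/erase-count IDs, not a complete list.
import json
import subprocess

def smart_json(device: str) -> dict:
    out = subprocess.run(["smartctl", "-j", "-A", device],
                         capture_output=True, text=True, check=False)
    return json.loads(out.stdout)

def report_wear(device: str) -> None:
    data = smart_json(device)
    if "nvme_smart_health_information_log" in data:        # NVMe: only coarse counters
        log = data["nvme_smart_health_information_log"]
        # data_units_written is in 512,000-byte units per the NVMe spec
        written_tb = log["data_units_written"] * 512_000 / 1e12
        print(f"{device}: {log['percentage_used']}% rated life used, "
              f"~{written_tb:.1f} TB written")
    else:                                                   # SATA: vendor-specific attributes
        for attr in data.get("ata_smart_attributes", {}).get("table", []):
            if attr["id"] in (173, 177, 231):               # common wear/erase-count IDs
                print(f"{device}: {attr['name']} = {attr['raw']['string']}")

report_wear("/dev/nvme0")   # placeholder device names
report_wear("/dev/sda")
```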
 
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
My guess is modern QLC might have reached 1000 by now?
That would be an improvement. What would be better would be upwards of 1200 to 1300.

The thing is, the numbers are very vague and ballpark.
This is because the exact numbers are all over the place. They vary (greatly in some cases) from manufacturer to manufacturer, and even from one NAND die design to another within the same manufacturer. And the catch is, they're only estimates based on brief testing and the longevity projected from the chemistry, lithography yields, and expected real-world conditions.

At the end of the day, general numbers are all we can settle on as it's the best info we can reasonably expect. It's a pathetically terrible situation, one that wouldn't exist if companies were truly honest about that data, or were legally compelled to be.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
That would be an improvement. What would be better would be upwards of 1200 to 1300.


This is because the exact numbers are all over the place. They vary (greatly in some cases) from manufacturer to manufacturer, and even from one NAND die design to another within the same manufacturer. And the catch is, they're only estimates based on brief testing and the longevity projected from the chemistry, lithography yields, and expected real-world conditions.

At the end of the day, general numbers are all we can settle on as it's the best info we can reasonably expect. It's a pathetically terrible situation, one that wouldn't exist if companies were truly honest about that data, or were legally compelled to be.
Yeah, well, what can you do? Release a new TV model only after you've run it for 5 years to prove it will indeed work for that long?
 
Joined
Jul 5, 2013
Messages
28,257 (6.75/day)
Yeah, well, what can you do? Release a new TV model only after you've run it for 5 years to prove it will indeed work for that long?
Exactly. The problem with the way NAND works is that voltage-cycling thing I mentioned earlier. With TLC it's been manageable. With QLC, it's been a crap show until the last couple of years. Things have clearly improved, but reliability is still not something we want to trust as much. I'll never touch PLC (5-bit).

What we really need is a replacement for NAND that is fast, stable, and (MUCH more) durable while staying affordable.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Exactly. The problem with the way NAND works is that voltage-cycling thing I mentioned earlier. With TLC it's been manageable. With QLC, it's been a crap show until the last couple of years. Things have clearly improved, but reliability is still not something we want to trust as much. I'll never touch PLC (5-bit).

What we really need is a replacement for NAND that is fast, stable, and (MUCH more) durable while staying affordable.
My money was on XPoint :(
 