+ Avoid SATA SSDs.
Cries into his soup, having built the home ZFS filestore out of the cheap Patriot M210 2TB SATA SSDs...
However I don't drive 'em over USB. They're on a 5-port PCIe x4 SATA card.
I was surprised how hot these babies report themselves to be. I bought into the myth that I could go fanless. Well, with a Noctua 40mm to cool them I can't hear the fan, although I have old ears. I'm mostly fanless, I guess.
The amount of cache amazed me. I guess when your job is to write complex encoded block states, lots of RAM on the device (for the FPGA or chipset doing the smarts, which I assume is itself a computer) to do this marshalling makes sense.
It's funny how "bad" we are as an industry at making rational choices. Like, WHY do NVMe SSDs not implement TRIM? What is it about them that meant "TRIM" didn't make sense?
> It's funny how "bad" we are as an industry at making rational choices. Like, WHY do NVMe SSDs not implement TRIM? What is it about them that meant "TRIM" didn't make sense?
Maybe CrystalDiskInfo is simplifying things (combining TRIM and the mentioned DEALLOCATE command?), but all my NVMe SSDs support TRIM according to it. And it would be really strange if NVMe didn't support any sort of trimming, as SSD performance and health heavily relies on it.
It could also be that what the author is observing is specific to macOS?
RE: TRIM vs DEALLOCATE, that seems to be a naming thing. Wikipedia confirms that it's indeed technically DEALLOCATE on NVMe.
https://en.wikipedia.org/wiki/Trim_(computing)#NVM_Express
The controllers on many USB SATA adapters don’t support TRIM.
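For what it's worth, on Linux you can check whether discards actually make it through a given path; a hedged sketch (the `/dev/nvme0` device name is a placeholder, substitute your own):

```shell
# Kernel view: nonzero DISC-GRAN/DISC-MAX columns mean the block layer
# can issue discards down that path. USB SATA bridges that don't
# translate TRIM typically show zeros here.
lsblk --discard

# NVMe view (needs nvme-cli): the Dataset Management command that
# carries "deallocate" is advertised in the ONCS field of the Identify
# Controller data (bit 2).
command -v nvme >/dev/null && nvme id-ctrl /dev/nvme0 | grep -i oncs || true
```

A drive that shows zeros through a USB adapter may still show real discard support when attached to a native SATA or NVMe port.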
That I'm aware of, but the claim is that TRIM does not exist for NVMe drives.
All of my NVMe SSDs support TRIM until I encrypt them, and that's by design. Are you by chance using dm-crypt on your SSDs?
>All of my NVMe SSDs support TRIM until I encrypt them, and that's by design
Why is this "by design"? There's no reason why disk encryption can't be compatible with TRIM. Yes, there's a small metadata leak, but that's fine in the overwhelming majority of cases.
This [1] is the answer, and I have no control over it. It works on LUKS2 but not on plain dm-crypt, which I use for external backups because it is headerless and does not expose the cipher, the hash, or the fact that there is explicitly encrypted data (although one can infer it). There was talk of removing the restriction, so maybe everything I said is no longer correct by now; I have not tested it in a while. The discussion was around adding the 'cryptsetup --allow-discards' option for plain dm-crypt instead of just LUKS2.
[1] - https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discar...
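For anyone on LUKS2 who does want discards despite the metadata leak, a minimal sketch (device and mapping names here are placeholders, not from the thread):

```shell
# One-off: open the mapping with discard pass-through enabled.
cryptsetup open --allow-discards /dev/nvme0n1p2 cryptroot

# LUKS2 only: store the flag persistently in the header so it applies
# on every subsequent open without extra options.
cryptsetup refresh --allow-discards --persistent cryptroot

# Then trim from the mounted filesystem as usual.
fstrim -v /
```

These commands need root and a real encrypted device, so treat this as a sketch of the option names rather than something to paste blindly.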
In my experience with cheap-ish TLC SSDs, once you fill them over 80% you'll occasionally run into random performance issues that manifest as system freezes.
With QLC SSDs, once you get to this level, the freezes are constant - I'd recommend that no one buy a QLC SSD under any circumstances.
QLC is the new standard and shouldn't matter much for high capacities.
Both depend on an SLC write cache; once that's full, performance drops significantly, a bit less so on TLC. Consumer SSDs just aren't made for writes that cover a large portion of their capacity.
Doesn't that feel like theft, though? The drive should be advertised at its fully useful capacity, with x GB of overage, like the turbo in a car or on my electric blower. I know I can't use it all the time, but it's there in a pinch. Then people with specific use cases can oversaturate their drives the way they overclock their processors.
SSDs get this cheap exactly by sacrificing excess performance and endurance. You can get some of that back by treating your 2TB drive as a 1.5TB drive. For most people, the tradeoffs chosen by the manufacturer make more sense.
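The 2TB-as-1.5TB suggestion works out to a hefty slice of spare flash; a quick back-of-envelope:

```shell
# Leaving 500 of 2000 GB unpartitioned hands that flash back to the
# controller for wear levelling and SLC caching, on top of whatever
# the factory already reserves.
total=2000
used=1500
echo "$(( (total - used) * 100 / total ))% over-provisioned"
```

In practice you'd realise this by simply partitioning only 1.5TB of a freshly trimmed drive and never touching the rest.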
If a regulation like that is passed it would just result in asterisks and small print (* burst write speed 6500MB/sec, if SLC cache is filled due to non-stop bulk writing decreases to 1700MB/sec).
That's already the case today. All speeds are caveated with an "up to".
Not really - I feel like vendors are pushing towards stacking more chips on top of each other rather than stuffing more data into a cell.
QLC feels like a step too far: data retention, TBW lifetime, and write speed after running out of SLC cache are all horrible compared to TLC, for a 33% increase in capacity.
When SSDs are literally cheap as chips, I'd gladly pay the premium for TLC.
Another one I ran into was file and free-space fragmentation. With certain workloads you can generate an awful lot of fragmentation (in this case I was reading and writing the same massive file set over and over with small changes), and the drive performed appallingly after a year of this workload. I fixed it by deleting all the files, trimming, and replacing all the files, but defragmentation software also works. It's nowhere near the problem it was with HDDs, but it's worth looking into at some point, because Windows does not perform well when files are very fragmented, regardless of what the theoretical situation is meant to be.