> Not a lot I can do about this except try sanitising the drive (which apparently works in some cases).

It's a waste of time trying to do anything other than replacing the drive.
It could be a QLC drive, and it might be suffering from something similar to what Samsung's planar TLC drives (the 840/840 Evo) had:
Because of minimal tolerance for charge degradation, the data in the cells starts evaporating once enough time has passed since it was written.
That causes bit/cell-level errors, which make the error correction work overtime.
Those Samsungs also started slowing down because of it.
Samsung "fixed" it with a firmware update that made the controller periodically refresh the data by rewriting it.
And while defragging doesn't actually do anything useful on an SSD, it did force the drive to rewrite the data it "defragged".
That would lower the number of cell read errors, bringing read speeds back toward the original.
But at the same time, those extra writes wear the cells down more, making the charge degradation faster.
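If you want to check whether your drive shows that symptom, a rough way is to compare cold-read throughput of a file written long ago against a freshly written copy of the same data. Here's a minimal Python sketch; the file paths are hypothetical examples, so point it at real files on the suspect drive, and reboot first so nothing is sitting in the OS cache:

```python
import os
import time

CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

def read_speed(path):
    """Sequential read throughput of one file, in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(CHUNK):
            pass
    return size / (time.perf_counter() - start) / 1e6

# Hypothetical paths: one file that has sat untouched for a year or two,
# and a fresh copy of the same data written today.
for path in (r"D:\old_install\data.pak", r"D:\fresh_copy\data.pak"):
    print(f"{path}: {read_speed(path):.0f} MB/s")
```

If the old file reads dramatically slower than the fresh copy, that points at stale-data degradation rather than a controller or cable problem.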
> Might stick to MLC (2 bits per cell) drives in the future, if these can still be had for reasonable money; this is what my Crucial 240GB OS drive has, and it's been good for about five years.
For the record, I switched to a fairly new and cheap 500GB Lexar SSD with no read errors, and it handles the game perfectly. Note that it has no DRAM (and is problem-free), so that was totally crap advice from certain people!
There haven't been MLC drives available on the consumer market in years.
Or if there are, they're super expensive.
Anyway, 3D NAND, with its vertically stacked layers of transistors, makes TLC basically comparable to tiny-transistor planar (single-layer) MLC.
You just have to stick to known-good TLC drives, like Crucial's MX500.
DRAM has little effect on normal, read-focused home workloads.
No drive has a DRAM cache big enough to hold much of even a smaller game, and read performance depends a lot more on the performance of the controller and the NAND.
Heck, Windows can cache a lot more stuff in unused RAM!
Where DRAM helps is in high-IOPS write and mixed workloads.
But anyway, a DRAM-less drive is still a lower-tier product than a DRAM-equipped one, and that should always show in the price.
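You can actually see that OS-level caching at work with a quick sketch like this (the path is a hypothetical example; any file of a few GB works): the second read of the same file is served mostly from Windows' file cache in RAM, which dwarfs anything the drive's own DRAM could do:

```python
import time

PATH = r"D:\some_big_file.bin"  # hypothetical: substitute any large file
CHUNK = 4 * 1024 * 1024

def timed_read(path):
    """Seconds taken to read the whole file sequentially."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return time.perf_counter() - start

print(f"first read:  {timed_read(PATH):.2f} s")  # served by the drive (if not already cached)
print(f"second read: {timed_read(PATH):.2f} s")  # served mostly from the RAM cache
```

If the second number is far lower, that speed-up came from system RAM, with no drive DRAM involved.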
Might as well do an intensive defrag at this point, to see if that helps!
Yep. So, 3D NAND + TLC = generally fine? But avoid single-layer (planar) TLC?
@james.miller already explained this on the second page; there's so much misinformation in this thread now it's probably best to just let them get on with it. You don't need to defragment SSDs, and doing so puts significant strain on the drive and can lead to the kinds of errors and performance issues you're seeing.
> @james.miller already explained this on the second page; there's so much misinformation in this thread now it's probably best to just let them get on with it.

A fair chunk of that misinformation is by yourself, about QLC drives (which aren't even relevant to this thread).
> I'm gonna try to swap the drive at a certain shop anyway, probably go for a Samsung 860 Pro 2TB (it has 64-layer 2-bit MLC V-NAND). Another option is the Crucial MX500; I'm sure either would be good.
> EDIT - S*** I've just noticed that it's out of stock.
> Any thoughts about the Crucial MX500 2TB? Is this the most reliable TLC 3D NAND drive?

The MX500 is a quality drive. I replaced my cacheless drive with it and all the problems disappeared.
> How about sustained read/write speed? Any problem there?

I just copied 126 1GB files to and from my two-thirds-full 1TB MX500, and both ways the speeds were about 450-500MB/s for the whole test. I'm actually a little surprised that it managed the write test without running out of SLC cache, but I reckon it just managed it, as the free space was a little over 3x the size of the test. It also has a DRAM cache, so there's no need to wait for a first read to find where the data is stored before a second read fetches the data.
> I just copied 126 1GB files to and from my two-thirds-full 1TB MX500, and both ways the speeds were about 450-500MB/s for the whole test. I'm actually a little surprised that it managed the write test without running out of SLC cache, but I reckon it just managed it, as the free space was a little over 3x the size of the test.

It ran out of SLC cache.
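Rather than argue about it, you can watch for the SLC-cache fall-off directly. Here's a rough Python sketch (the test path and the 64GB size are my own assumptions; keep it below your free space and delete the file afterwards) that writes a big file in chunks and logs throughput as it goes. On a drive that exhausts its SLC cache, the MB/s figure drops sharply partway through:

```python
import os
import time

TEST_PATH = r"D:\slc_test.bin"  # hypothetical path on the drive under test
CHUNK = 64 * 1024 * 1024        # write 64 MiB at a time
TOTAL = 64 * 1024**3            # 64 GiB test file (assumed; stay below free space)

buf = os.urandom(CHUNK)         # incompressible data, so compression can't cheat
written = 0
start = time.perf_counter()
with open(TEST_PATH, "wb", buffering=0) as f:
    while written < TOTAL:
        t0 = time.perf_counter()
        f.write(buf)
        os.fsync(f.fileno())    # push it to the drive so the timing is honest
        written += CHUNK
        mbps = CHUNK / (time.perf_counter() - t0) / 1e6
        print(f"{written / 1024**3:5.1f} GiB written: {mbps:6.0f} MB/s")
print(f"average: {TOTAL / (time.perf_counter() - start) / 1e6:.0f} MB/s")
os.remove(TEST_PATH)
```

If the per-chunk figure holds near the drive's rated speed at the start and then steps down to a much lower plateau, that step is the SLC cache running out.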