I've mentioned this a few times in other threads, but I thought I'd start a new topic to try and explain myself further and invite discussion.
I believe that the emphasis placed on 4k performance in current SSDs is overhyped for 99% of people.
This is most probably due to the bad old first-generation JMicron era, where it was quite rightly highlighted that the drives suffered in general use thanks to 4k write performance that was significantly worse than that of conventional hard drives. This caused a lot of emphasis in benchmarking to be placed on 4k performance, as it became linked in our minds as the thing that really mattered.
If we take a realistic look at things, however, since all the software we run was - and will be for a considerable amount of time - designed around the limitations of mechanical drives, our PCs don't actually spend that much time on 4k operations as it is.
There are diminishing returns on the perceived performance improvement from 4k IOPS.
It's similar to seeing two graphics card benchmarks: one card has a minimum of 100FPS in any game at max settings, the other a minimum of 200FPS. Playing on either, you wouldn't feel any difference. Of course, if price and all other factors were equal, you'd go for the 200FPS card because it would be more futureproof, but the SSD world is more complex than that. Firstly, 4k performance demands are limited thanks to the prevalence of mechanical drives. They are going to be with us for a very long time, because NAND SSDs are unlikely to ever reach mainstream pricing, and developers have to consider the lowest common denominator.
Secondly, sequential performance is always noticeable - halving the time it takes to load a few GB of Photoshop files or a game level is very perceptible.
Taking some numbers from the traces done for AnandTech's "Heavy Workload" SSD test, which states:
If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and Powerpoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, Windows updates are also installed. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.
The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
Assuming the 30% of accesses that are 4KB applies equally to reads and writes, we can extrapolate the average per-second activity:

(0.3 * ((60/22) * 72,411)) / 3600 = 16.46 4k writes per second

(0.3 * ((60/22) * 128,895)) / 3600 = 29.3 4k reads per second
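For anyone who wants to check the arithmetic, here's a small sketch of the extrapolation. The trace figures (22 minutes, 128,895 reads, 72,411 writes, ~30% of accesses at 4KB) come from the quoted AnandTech description; applying the 30% figure equally to reads and writes is my assumption.

```python
# Extrapolate per-second 4KB op rates from AnandTech's 22-minute
# Heavy Workload trace. Assumption: the "30% of accesses were 4KB"
# figure applies equally to reads and to writes.
TRACE_MINUTES = 22
READS, WRITES = 128_895, 72_411
FRACTION_4K = 0.30

def per_second_4k(ops):
    # Scale the 22-minute trace up to one hour, take the 4KB share,
    # then divide by 3600 seconds to get ops per second.
    ops_per_hour = (60 / TRACE_MINUTES) * ops
    return FRACTION_4K * ops_per_hour / 3600

print(f"4k writes/sec: {per_second_4k(WRITES):.2f}")  # ~16.46
print(f"4k reads/sec:  {per_second_4k(READS):.2f}")   # ~29.29
```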
I drew up a table for a few common drives, using 4k MB/s numbers from CrystalDiskMark and LegitReviews:
As you can see, on mechanical drives you are spending hundreds of milliseconds per second on 4k ops, whereas even on a "value" SSD like the Kingston SSDNow V+ series you spend less than 10ms on either.
To put this into context: at 60FPS there is roughly 16.7ms between frames and it feels smooth to the most discerning gamer, while TV and movies get by on 24fps, over 40ms between frames. It's safe to say that you won't notice 10ms worth of delay spread across a second. In fact, in an hour an SSD like the common Indilinx drives only spends around 15 seconds on 4k reads or writes, and the top-of-the-range C300 only saves you around 10 seconds per hour on writes, or 3 seconds on reads.
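As a sketch of how the table's numbers fall out: multiply the per-second op rate by 4KB to get bytes moved per second, then divide by the drive's 4k throughput. The MB/s figures below are illustrative assumptions standing in for real CrystalDiskMark results, not measurements of any particular drive.

```python
# Milliseconds per second spent on 4KB I/O, given the per-second op
# rates extrapolated from the trace and a drive's 4k throughput.
OP_SIZE = 4 * 1024  # bytes per 4KB operation

def ms_per_second(ops_per_sec, mb_per_sec):
    # Bytes demanded per second, divided by drive throughput,
    # converted to milliseconds.
    bytes_per_sec = ops_per_sec * OP_SIZE
    return 1000 * bytes_per_sec / (mb_per_sec * 1_000_000)

# 29.3 4k reads/s and 16.46 4k writes/s from the extrapolation above;
# the read/write MB/s values here are assumed, illustrative numbers.
for name, read_mbs, write_mbs in [
    ("HDD (assumed)",        0.5,  1.0),
    ("value SSD (assumed)", 15.0, 10.0),
]:
    print(name,
          f"reads: {ms_per_second(29.3, read_mbs):.1f} ms/s,",
          f"writes: {ms_per_second(16.46, write_mbs):.1f} ms/s")
```

With these assumed speeds the mechanical drive lands in the hundreds of milliseconds per second while the value SSD stays under 10ms, which is the shape of the comparison in the table.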
Of course, these are average values, and there will obviously be peaks in activity, but they demonstrate my thinking quite well. In addition, the time your drive spends on operations is even less noticeable because many of them happen in the background while your attention is occupied elsewhere - you don't care what your web browser does in the background while you're reading a page. Conversely, many sequential operations (most program and file loading, file copying, and game-level loading) are things you are waiting on before you can proceed... though of course there are exceptions here too, such as watching a movie, where you don't care about the constant disk activity because your attention is on the film.
This makes it almost impossible to design a realistic benchmark. All I can really say is that you'd be hard pressed to detect any difference in practice, based on 4k performance alone, amongst any of the current crop of SSDs.
TL;DR: Unless you're running a hardcore database, even under heavy multitasking your drive doesn't spend much time on 4k ops anyway, so as long as it's a current-gen drive, look to the sequential numbers for perceptible improvement.