NAS -v- Server - Small file performance

Associate · Joined 22 Jun 2018 · Posts: 1,706 · Location: Doon the watah ... Scotland
Not really sure if this should be in Server, Networking or Storage, but:

I currently have a WD EX2 NAS. It's a good few years old now, and it has the capacity space-wise that I need, however it's just woefully slow in terms of small-file performance when doing a lot of backups of small files. It just seems to bog right down and churn.

On a single larger file (an ISO etc.), I'll get an acceptable transfer rate of the sort you would expect on a 1 Gbit network, so I'm happy there is nothing wrong with my general network infrastructure in terms of switches and cabling.
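For reference, a quick back-of-the-envelope on what a 1 Gbit link can do at best (a sketch only; real SMB copies land a bit below the raw wire figure):

```shell
# Theoretical ceiling of a 1 Gbit/s link, before any protocol overhead.
awk 'BEGIN { printf "1 GbE ceiling: %.0f MB/s\n", 1000 / 8 }'
# Prints: 1 GbE ceiling: 125 MB/s
# TCP/SMB overhead typically leaves large sequential copies around 110-115 MB/s.
```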

Ignoring the power-usage aspects at this time:


Would I generally get better small-file performance from a proper server device that provides network storage? I just really need to be able to transfer lots of sub-1 MB files on a regular basis between devices.

I was thinking along the lines of a cheap Dell R710 running either FreeNAS or Linux with ZFS etc. etc. (to be decided, as a wee hobby project).

Would that sort of extra CPU power help in such circumstances?
 
Hadn't really thought about what I would consider acceptable, I suppose ... I'm not really sure how to quantify it. I left a transfer running for a while there as part of playing around. Very roughly: 3 hrs 20 mins, 15 GB, 2,300 files (probably split between sub-1 MB files and a lot of ~20 MB RAW photo files).

That's a straight average file size of around 6.5 MB, transferring in the region of 1.25 MB/s ... so on average not even a file per second?
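Sanity-checking those rough figures (3 h 20 min = 12,000 s, 15 GB taken as 15,360 MB, 2,300 files):

```shell
secs=$((3 * 3600 + 20 * 60))   # 12000 seconds
mb=$((15 * 1024))              # 15 GB as 15360 MB
awk -v mb="$mb" -v s="$secs" -v n=2300 'BEGIN {
  printf "avg file size: %.1f MB\n",  mb / n   # ~6.7 MB
  printf "throughput:    %.2f MB/s\n", mb / s  # ~1.28 MB/s
  printf "files/sec:     %.2f\n",      n / s   # ~0.19 -- one file every ~5 s
}'
```

So it's more like one file completed every five seconds than one per second.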

I was hoping for more than that across the network. I do accept that small files will impact on performance, but still ... I'd like it to be better than that.


As for NAS device versus full server hardware, that's still to be decided, but I'm coming round to asking myself: if I were to remove any sort of CPU bottleneck, would that really open up the transfer speeds ... or would I be limited by other things?

I had read once that, typically, due to HDD seek times and network protocol overhead, you would be lucky to see better than 3 to 4 files completed per second ... I don't seem to be anywhere close to that based on the above figures.
 
Its use is in a bit of limbo at the moment. I was using it for backup-type use for a long time, and it worked OK once it was doing incremental syncs.

I've since added an off-site backup setup, so I was looking at using it more as a central place to keep stuff between machines in the house for quick access, but the slower transfers when messing around with it are putting me off a bit. The more I think about it, the more I feel I'm looking for an excuse to use it, to an extent.

The question still stands, though: would a higher-powered setup likely give better small-file throughput?
 
Just thought I would follow up on this. The NAS is now working a lot better. My issue was that I had fully reset the NAS ... this re-enabled a few things on the unit which I had previously disabled, in particular background processes that scan files and generate hidden thumbnails for anything they find on the machine.

So whilst in the process of transferring lots of small files onto the NAS, it was in parallel trying to scan and generate thumbnails of the same ... what a crap setup. It's well documented on the net, it's been like that for years, and WD seem unable to recognise it or provide an option in the web GUI to control it. There is information out there on how to disable these things ... you have to SSH into the machine and type in a few commands ... perfectly doable, but something I shouldn't really need to do in the first place, as it truly cripples performance.
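For anyone searching later, the usual recipe looks something like the below. This is a sketch only: the exact init script names depend on the firmware version, and `wdmcserverd` / `wdphotodbmergerd` are the two culprits most commonly reported in My Cloud threads, so check what's actually present on your unit.

```shell
# Sketch -- exact service names vary by WD firmware version.
# wdmcserverd   : the media crawler/indexer
# wdphotodbmergerd : the thumbnail generator
ssh root@<nas-ip>                  # SSH has to be enabled in the web GUI first
/etc/init.d/wdmcserverd stop       # stop the media scanner
/etc/init.d/wdphotodbmergerd stop  # stop thumbnail generation
```

Note these get switched back on by a factory reset (as I found out the hard way).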

Another thing I played around with was jumbo frames on the network ... in short, whilst I could enable them, the throughput of the transfer was actually lower than with the default frame size.

So it's working much better now, as I say. Currently transferring files across to it as a mess-around.

In the space of 20 minutes, I've transferred 4,000+ files totalling more than 44 GB ... which is a huge increase in performance over before (>3 hrs for 2,300 files totalling 15 GB).


Edit: spoke too soon, it's dragging its backside again .... sigh!
 
It's looking like it at this rate !

That being said, I'm beginning to wonder if it's an HDD issue. It's cripplingly slow at times depending on which folder is being written to or read from, yet others are fine. Explorer will show 'not responding' when just navigating folders at times too, in the same way a machine stalls when you put a DVD in a drive that can't read it and keeps retrying.

... Onwards and downwards!


Edit: 24 hrs later ... sigh

Getting to the bottom of this, finally, through learning some Linux commands to see what's going on inside the NAS. The simple answer is that there are processes that start up and scan the hard drives for media etc. ... I don't know how often they scan, but it seems to be a process that keeps restarting itself.

So when adding lots of files, the process kicks in and starts scanning all the new directories. It uses about 90% of the CPU time to do so, leaving no capacity for other operations. As soon as the process is stopped, the NAS works like a dream.

I just need to find a way to keep that process killed permanently now.
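One way to keep it dead would be a small kill-by-name helper run from a loop or cron. A sketch, assuming the process name is `wdmcserver` (the name reported in WD forum threads; adjust for your firmware) and that `pidof` is available, as it usually is on BusyBox-based firmware:

```shell
# Hypothetical watchdog: kill a named process whenever the firmware respawns it.
kill_if_running() {
  pid=$(pidof "$1" 2>/dev/null)     # empty if the process isn't running
  if [ -n "$pid" ]; then
    kill $pid && echo "killed $1 (pid $pid)"
  else
    echo "$1 not running"
  fi
}

# e.g. loop it:  while true; do kill_if_running wdmcserver; sleep 60; done
kill_if_running wdmcserver
```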
 
You can stop the index process once it's booted. I've even gone into the script that invokes it and added a line to make the script exit immediately.

That's all well and good until the NAS gets rebooted, or a setting is changed that uses the indexing, because in those situations it seems to restore its script files from a hidden original copy, which means my changes are all undone.

I've been playing with it for a while and I think it's time for a change. Even with the indexing disabled, the transfer rate slows down over time and I can't seem to trace why.
 