Enterprise file services (home areas etc)

So we are looking at updating our file services. We have about 10TB of file data, currently spread across multiple sites and multiple VMs.

Some of us like smaller, manageable, standard VMs to store user data (like we have already).

Others want appliances, NAS heads and the like to publish SMB.

Then there's the Microsoft Storage Spaces solution: a cluster of physical servers attached to JBODs that we publish shares from.

Lots of different options, with pros and cons for each. Just wondering what you lot out there are moving towards, or what to avoid.

Cheers,
 
10TB isn't all that much data, especially when you consider it's currently spread across multiple sites.

What are your goals here?
 
Any sort of proper NAS. Using Windows servers for file sharing is just so much hassle.
 
A few votes for Netapp I see.

The main concern with Windows file servers is that NTFS doesn't scale, and the problem of long chkdsk runs comes into play with large volumes. We can distribute volumes using mount points and DFS, but that adds complexity.

If we went the NetApp route, would spending the extra to buy a 'solution' from a supplier, including the design and documentation, be wiser than just buying the required tin and configuring it ourselves? I'm wary of new systems when nobody has any experience of the kit; it just takes too long to implement, and inexperienced first efforts at these designs always fall short.
 
Personally, I would just go with something like one of Synology's Enterprise NAS solutions. Much easier to administer than using servers, etc.

I was looking for something some time ago to allow access from multiple locations. We use Macs, so initially I tried a Mac Mini server. In the end I ditched it and got a Synology NAS, which was much simpler from an admin point of view.
 
Thanks, but from experience, Synology / QNAP are not really enterprise products - OK for a backup repository or similar, but not for live data that needs performance and 99.999% uptime.
 
Synology products need to be taken offline to do a firmware update, they are not enterprise storage.

This inconvenience is compounded by the almost bi-weekly DSM updates patching glaring security holes in services that you don't need and can't turn off, but which can be exploited.
 
NetApp are OK, but there is a fair learning curve with them and they are not the easiest of devices to administer. The hardware reliability and support are good, but I think you'll find them expensive compared to other solutions. I would 100% recommend buying services to deploy a NetApp if you've never had one before, even though I generally prefer to do everything myself.

CIFS performance on NetApp is not great IME. We've got a FAS2240 with 24 SATA drives doing CIFS and it never seems to better 60Mb/sec.

Whilst they will do AD integration, they are not a full domain member. This hadn't bothered us until we did a big job rolling out AD Managed Service Accounts for a lot of services running on our servers, and found that the NetApp doesn't understand them, so you can't give an MSA write permission on a CIFS share on the filer. NetApp support said it is supported in Clustered Data ONTAP, but we're on 7-Mode. How do we upgrade? Delete all the data and start again! Not happening.

We can't do SMB3, and there's an issue with enforcing server signing on the NetApp as well that keeps popping up on our pen tests. Apparently these too are fixed in clustered ONTAP, but that's no good to me. This kit cost nearly £100k and is only two years old; I'd expect a non-destructive upgrade path from enterprise kit, really.

Another thing to bear in mind is that a NetApp will terminate all the CIFS sessions in the event of a controller failover/takeover. This is probably more CIFS' fault than NetApp's, given the stateful nature of CIFS, but it's worth bearing in mind. Upgrades need to be done out of hours, and CIFS will drop twice, because the controller fails over and then gives back once the upgrade is complete. Server 2012 R2 uses a witness protocol which keeps CIFS sessions alive even if the underlying storage drops, provided there is a replica.

You should see good dedupe and compression savings with a NetApp, and you can run hourly snapshots which integrate with Windows shadow copies. We let users view their last 7 days' worth of snapshots, with hourlies during the working day, so when the dozy gits delete all their files and don't notice for 3 days, they can restore them themselves, or at least get the Service Desk to do it for them. I can't remember the last time I had to get a user file off a tape. NDMP backups are good on the NetApp too.
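As a rough back-of-the-envelope of what a retention policy like that accumulates (the schedule values below are illustrative assumptions, not the poster's actual config):

```python
# Sketch: how many snapshots a "7 days of hourlies during the working
# day" policy keeps around, per volume. All values are assumptions
# chosen to illustrate the scheme described above.

hourly_per_day = 10   # e.g. one snapshot per hour, 08:00-17:00
days_kept = 7         # users can browse a week back
nightly_kept = 7      # plus one nightly snapshot, kept a week

total = hourly_per_day * days_kept + nightly_kept
print(total)  # 77 snapshots visible under Previous Versions
```

Even with dozens of restore points per volume, copy-on-write snapshots only cost the space of changed blocks, which is why self-service restores via Previous Versions are cheap to offer.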

We're looking at ditching NetApp and investigating Windows Storage Server and Server 2012 R2 solutions with DFS. We've about 10TB of data too. A lot of experienced admins won't even look at Windows file servers these days, but there's lots of stuff in 2012 R2 that makes them attractive. We've not decided whether we'll use iSCSI storage or just local disk if we go down the server route.
 
I would agree and disagree with blueboy.

Our pair of 2240-4s with a second shelf offer fantastic CIFS performance, and I find them really easy to administer once they are set up - as long as they are set up well. Definitely get a NetApp partner in for the design and configuration; again, I would prefer to do it myself, but time doesn't allow. VMs running through a Flash Pool are really good. We're on our second pair, and if I'm still with my current employer in a few years, we'll get a new pair.
 
Storage is a buyer's market now as well. Get some people in to do demos and leave kit with you for a couple of weeks. If they won't do it, don't buy from them.
 
Our pair of 2240-4s with a second shelf offer fantastic CIFS performance.

That's a large configuration and won't have come cheap. With flash in there, it should fly.

To be clear, the day-to-day admin of a NetApp - volume management, shares, snapshots etc. - is no more difficult than anything else.

The dark art with them is foreseeing, diagnosing and fixing performance problems. These usually only become apparent when your aggregates are filling up and you're putting a reasonable number of IOPS through them. They can be working just fine one day, then the next the performance tanks without warning - you delete a snapshot from a VM and disk latency goes through the roof. By the time NetApp have responded to your support call, it's all right again.

One other thing I forgot to point out with NetApp is that physical disk space doesn't translate into usable storage: 24x 2TB disks only gives about 20TB usable. You should treat the array as full when the aggregates hit 80% capacity, because you need free space when upgrading, and performance drops off considerably - the WAFL filesystem relies on turning random writes into sequential writes, which becomes more difficult as the disks fill up.
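A rough sketch of that arithmetic shows why raw capacity shrinks so much. The overhead factors below (RAID-DP parity, spares, right-sizing, reserves) are illustrative assumptions, not NetApp-published figures - the real numbers depend on RAID group layout and reserve settings:

```python
# Why 24x 2TB of raw disk ends up somewhere around 20TB usable.
# Every overhead factor here is an assumption for illustration only.

disks, tb_per_disk = 24, 2.0
raw = disks * tb_per_disk                      # 48 TB of raw disk

parity = 4          # e.g. two RAID-DP groups, 2 parity disks each
spares = 2          # hot spares
data_disks = disks - parity - spares           # 18 data disks

right_sized = data_disks * tb_per_disk * 0.85  # loss to right-sizing
aggregate = right_sized * 0.90                 # ~10% WAFL reserve
usable = aggregate * 0.80                      # ~20% snapshot reserve

# Treat the aggregate as full at 80%, so WAFL can still turn random
# writes into sequential ones and upgrades have headroom.
practical_ceiling = usable * 0.80

print(f"raw={raw:.0f}TB usable~{usable:.1f}TB ceiling~{practical_ceiling:.1f}TB")
```

Under these assumptions, 48TB raw lands in the low-20s TB usable, broadly in line with the ~20TB quoted above, and the practical working ceiling is lower still.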

An over-specified/lightly used NetApp will be as reliable as they come, but you'll hit problems if you push them in terms of capacity and IOPS. Another shelf would probably sort ours out, but our reseller has no interest in selling us one, as they really want to punt us another array.
 
blueboy2001 said:
We're looking at ditching NetApp and investigating Windows Storage Server and Server 2012 R2 solutions with DFS. We've about 10TB of data too. A lot of experienced admins won't even look at Windows file servers these days, but there's lots of stuff in 2012 R2 that makes them attractive. We've not decided whether we'll use iSCSI storage or just local disk if we go down the server route.
The last deployment I did was moving an entire estate over to HP c3000 blade chassis, a mix of server and server-graphics blades, a few cheap HP DAS arrays (FC, SAS, 10GbE where applicable), and a Windows Scale-Out File Server cluster plus DFS for replication. If they had ponied up for more enclosures, faster disks and more SSDs, it would have been even better, but there's always the chance to add things later. They're running SharePoint 2013, Exchange 2013 hybrid and a bunch of net-facing services quite happily from it for ~1700 staff. On top of that, they're leveraging 2012 R2 VDIs and User Profile Disks with Wyse terminals for ~100 users - very, very fast, and apparently super cost-effective in terms of licensing too.

The change in mindset from 'this chassis does X' to 'all the chassis are dumb as bricks; let the software do the work' takes a while for people to accept, but it is a sensible way of doing things. You do lose some of the fancier features that mature storage systems offer, and your needs will vary, but keep an open mind. You may end up saving quite a lot of money.

The main qualification I would add to that approach is that your backups need to be robust above all else. We moved them over to Veeam for the VMs, and Windows Server Backup is sufficient for everything else.

I don't know what kind of budget you have, but right now I would hold fire until VMware ESXi 6 arrives on the 2nd of February (I think? Maybe the 6th?) - what I've read and seen makes me think vSAN is going to be a real player in this space. Even then you'll still need Windows on top to do your business (presuming you're not a Linux shop), so that has to factor into cost considerations, which are not small given the node requirements of vSAN.
 