SAS JBOD

We're evaluating some low-cost hardware storage solutions to potentially replace some legacy arrays within our estate (several EMC CX4 arrays).

We have had great experience with DataCore SANsymphony-V and will most likely use it on top of some commodity hardware. It can easily be zoned into the existing Fibre Channel network and allow us to migrate volumes off the EMC arrays. We don't have a massive amount of experience with JBOD storage; at a minimum we would have two identical servers, each with identical JBOD-attached storage, and mirror volumes using SANsymphony. We've seen a few JBODs that have dual controllers: how does failover between the controllers work, and what type of RAID controllers support dual SAS interfaces? Or do you connect using an HBA and let the JBOD do the RAID?
 
You could use an approved JBOD and Server 2012 R2 to build a Scale-Out File Server - Aidan Finn has quite a bit on his blog, e.g. here. That scenario uses HBAs and leaves Windows to manage the (tiered) storage.
 

[Attached image: image4.png (dual-controller SAS JBOD diagram)]


We need Fibre Channel, which rules out SOFS, but the above diagram illustrates nicely how dual SAS JBOD controllers would work, so thanks for that. We want each node to be physically independent, as we have a proven track record with SANsymphony and know it's rock solid with volume mirroring.

We'll be using two identical servers, each with one or two JBODs attached containing mixed disk types (SAS and SATA), identically configured on each node. Storage will be presented to hosts over Fibre Channel, and we'll mirror every volume between the two nodes. RAID will protect against single disk failure, and the mirroring for active/active access will provide node resiliency and performance. We'll slam 256GB of RAM into each one as it's so cheap.
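
For a rough sense of what that buys in capacity terms, here's a minimal sketch in Python (the disk counts and sizes are illustrative assumptions, not our actual bill of materials) of how RAID 5 inside each node plus the node-to-node mirror turn raw capacity into what actually gets presented:

Code:
# Rough capacity model: RAID 5 within each node, then a 2-way synchronous
# mirror between the two nodes (SANsymphony-V style volume mirroring).
# Disk counts and sizes below are illustrative assumptions, not a quote.

def raid5_group_usable_gb(disks_in_group: int, disk_gb: int) -> int:
    """RAID 5 keeps (n - 1) disks' worth of data; one disk's capacity goes to parity."""
    return (disks_in_group - 1) * disk_gb

# Assumed fill per node: eight 6-disk RAID 5 groups of 4 TB disks.
groups_per_node = 8
per_node_usable = groups_per_node * raid5_group_usable_gb(6, 4000)  # 160,000 GB
per_node_raw = groups_per_node * 6 * 4000                           # 192,000 GB

# Every volume is mirrored between the two nodes, so the capacity presented
# over FC is that of a single node; the second copy buys node resiliency
# and active/active access rather than extra space.
effective_gb = per_node_usable
total_raw_gb = 2 * per_node_raw

print(f"Raw across both nodes: {total_raw_gb:,} GB")
print(f"Presented over FC    : {effective_gb:,} GB "
      f"({effective_gb / total_raw_gb:.0%} of raw after RAID 5 + mirroring)")

In other words, however much we hang off each node, only one node's worth of usable space gets counted; the second copy is what buys the node resiliency.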

Thanks.
 
This seems as decent a place to ask as any - I've seen Microsoft pitch SOFS as some sort of SAN alternative for people on limited budgets, and that diagram above explains the whole thing a lot better than any documentation MS has managed to produce so far.

The bit I'm struggling with is that while it might be cheaper than a 3PAR or EMC SAN or whatever, low-end NetApp NAS appliances are already pretty cheap, the Dell MD3220i is about 3p and can do SSD tiering now as well, and for that you get the dual controllers, redundancy, expansion etc. in one box. From a very quick Google, one of those DataOn trays is ~£3k, and then you have all the finger-pointing involved, with the OS vendor blaming the SAS controller manufacturer or whoever made the physical box, and then blaming the DataOn shelf for being garbage, etc.

I guess I'm struggling to see the market it's aimed at filling. If you were really, really poor, wouldn't you just buy VSA when you buy vSphere Essentials Plus for your 3 hosts and fill them with disk?
 
I think it's a myth that firmware bugs and niggles don't impact the leading manufacturers; we've suffered greatly from bugs on EMC and NetApp. While both have been great at helping resolve issues, a SAN is always connected to multiple systems; true, support may be easier, but then it costs.

We've been quoted £330,000 for a pair of NetApp heads, disks and shelves at £180K, and a further £60K for 36 months' worth of support. If we went with Nexenta using JBOD, we'd have more disks and more IOPS for £150K. Sure, NetApp would reduce the costs when pushed (this was the first quote), but then the Nexenta offering was a first quote too.

If the price were only a couple of pounds' difference I'd see your point, but when it's thousands of pounds of difference, and you don't need the NetApp/EMC support/badge/plugins but you do need mass storage, there are other ways of doing it.

I'm all about using the right tool for the job: I'm an EMC fan, not so keen on NetApp, but I also love DataCore SANsymphony-V.

The problem with VSA/VSAN is that it's VM-only; we need FC to present storage to Oracle RAC clusters, physical MS boxes, Unix, etc... we've generally got one of just about everything that was ever made.
 
I realise what you're saying and it all makes sense, but £100k+ deployments aren't really the target of SOFS systems. Everything I've seen coming out of MS pitches SOFS as a cost-reducing exercise for smaller deployments. My point was that I'm just not seeing the benefits of it over any of the other options that a budget-limited deployment would consider.
 
I've been looking at SOFS as an alternative to shared SAS (HP P2000 G3 / MSA 2040) or Dell MD3xxx for 2- or 3-node virtualization clusters. SOFS does look to have some cost benefits (mainly not paying silly money for HP/Dell HDs), but then there are the overheads of managing the Windows installs.

There's another Aidan Finn blog entry here which shows what MS are doing internally with SOFS:

They switched to WS2012 R2 with SOFS architectures:

20 x WS2012 R2 clustered file servers provide the SOFS HA architecture with easy manageability.
20 x JBODs (60 x 3.5″ disk slots) were selected. Do the maths; that’s 20 x 60 x 4 TB = 4800 TB or > 4.6 petabytes!!! Yes, the graphic says they are 3 TB drives but the text in the paper says the disks are 4 TB.

There is an aggregate of 80 Gbps of networking to the servers. This is accomplished with 10 Gbps networking – I would guess it is iWARP.
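
A quick sanity check of the quoted maths, as a throwaway Python snippet (using the blog's divide-by-1024 for the TB-to-petabyte step):

Code:
# Check the capacity figure quoted from the MS SOFS write-up above.
jbods = 20
slots_per_jbod = 60
disk_tb = 4  # the paper's text says 4 TB drives (the graphic says 3 TB)

total_tb = jbods * slots_per_jbod * disk_tb
print(f"{total_tb:,} TB total")       # 4,800 TB
print(f"~{total_tb / 1024:.2f} PB")   # ~4.69 PB, i.e. > 4.6 petabytes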
 
There's still a list of approved disks for each enclosure (and I'd assume you need to compare this with the approved disks for each controller and buy where there's an overlap) but granted you aren't paying SAN prices for them.
 
Indeed.

I've just looked at one of my trade distributors and an ST600MM0026 (Savvio 10K.6 600GB) is £161+VAT.

An HP 600GB SFF SAS 10k (C8S58A) is £342+VAT; the same-spec disk for a ProLiant DL380p G8 is £337+VAT (both from the same distributor). Buy the ProLiant HD from an "independent" distributor and the price comes down to ~£180+VAT.
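
To put rough numbers on that markup (prices as quoted above, ex-VAT; the "independent" figure is the approximate one mentioned), a small Python comparison:

Code:
# Compare the OEM-badged drive prices above with the bare Seagate drive.
bare_drive_gbp = 161  # ST600MM0026 (Savvio 10K.6 600GB), ex-VAT

oem_prices_gbp = {
    "HP 600GB SFF SAS 10k (C8S58A)": 342,
    "Same-spec disk for ProLiant DL380p G8": 337,
    "ProLiant HD via independent distributor (approx.)": 180,
}

for name, price in oem_prices_gbp.items():
    markup = (price - bare_drive_gbp) / bare_drive_gbp
    print(f"{name}: GBP {price} ({markup:+.0%} vs the bare drive)")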
 
Code:
Physical Slot	Rough Capacity (GB)	Description
HDD0	4,000	RAID 5 Data (1)
HDD1	4,000	RAID 5 Data (2)
HDD2	4,000	RAID 5 Data (3)
HDD3	4,000	RAID 5 Data (4)
HDD4	4,000	RAID 5 Data (5)
HDD5	4,000	RAID 5 Parity (1)
HDD6	4,000	RAID 5 Data (1)
HDD7	4,000	RAID 5 Data (2)
HDD8	4,000	RAID 5 Data (3)
HDD9	4,000	RAID 5 Data (4)
HDD10	4,000	RAID 5 Data (5)
HDD11	4,000	RAID 5 Parity (1)
HDD12	600	RAID 5 Data (1)
HDD13	600	RAID 5 Data (2)
HDD14	600	RAID 5 Data (3)
HDD15	600	RAID 5 Data (4)
HDD16	600	RAID 5 Data (5)
HDD17	600	RAID 5 Data (6)
HDD18	600	RAID 5 Data (7)
HDD19	600	RAID 5 Data (8)
HDD20	600	RAID 5 Parity (1)
HDD21	600	RAID 5 Data (1)
HDD22	600	RAID 5 Data (2)
HDD23	600	RAID 5 Data (3)
HDD24	600	RAID 5 Data (4)
HDD25	600	RAID 5 Data (5)
HDD26	600	RAID 5 Data (6)
HDD27	600	RAID 5 Data (7)
HDD28	600	RAID 5 Data (8)
HDD29	600	RAID 5 Parity (1)
HDD30	600	RAID 5 Data (1)
HDD31	600	RAID 5 Data (2)
HDD32	600	RAID 5 Data (3)
HDD33	600	RAID 5 Data (4)
HDD34	600	RAID 5 Data (5)
HDD35	600	RAID 5 Data (6)
HDD36	600	RAID 5 Data (7)
HDD37	600	RAID 5 Data (8)
HDD38	600	RAID 5 Parity (1)
HDD39	600	RAID 5 Data (1)
HDD40	600	RAID 5 Data (2)
HDD41	600	RAID 5 Data (3)
HDD42	600	RAID 5 Data (4)
HDD43	600	RAID 5 Data (5)
HDD44	600	RAID 5 Data (6)
HDD45	600	RAID 5 Data (7)
HDD46	600	RAID 5 Data (8)
HDD47	600	RAID 5 Parity (1)
HDD48	600	RAID 10 Data (1)
HDD49	600	RAID 10 Data (2)
HDD50	600	RAID 10 Mirror (1)
HDD51	600	RAID 10 Mirror (2)
HDD52	600	RAID 10 Data (1)
HDD53	600	RAID 10 Data (2)
HDD54	600	RAID 10 Mirror (1)
HDD55	600	RAID 10 Mirror (2)
HDD56	600	Global Hotspare (1)
HDD57	600	Global Hotspare (2)
HDD58	4,000	Global Hotspare (3)
HDD59	4,000	Global Hotspare (4)

So we produced a very rough sizing (ignoring binary vs decimal and the exact sizing of the RAID groups), but the above works out to roughly 61,600 GB of usable capacity for a little over £16K; this includes the cost of the shelf, the disks and the RAID controller with 3 years of on-site support. We would have two of these to mirror the data between the JBODs. That's about 53p per mirrored GB, or roughly £0.01 per GB per month.
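
For anyone wanting to reproduce those figures, here's the rough arithmetic as a Python snippet (group layout taken from the table above; the per-unit cost is the approximate "a little over £16K" quoted, and everything stays in decimal GB):

Code:
# Usable capacity and pence-per-GB from the slot layout above.
# Decimal GB throughout; hot spares excluded; very rough, as per the post.

raid5_4tb_gb = 2 * 5 * 4000   # two 6-disk RAID 5 groups of 4 TB disks   -> 40,000 GB
raid5_600_gb = 4 * 8 * 600    # four 9-disk RAID 5 groups of 600 GB disks -> 19,200 GB
raid10_gb    = 2 * 2 * 600    # two 4-disk RAID 10 sets of 600 GB disks   ->  2,400 GB
usable_gb = raid5_4tb_gb + raid5_600_gb + raid10_gb   # 61,600 GB

cost_per_unit_gbp = 16_300    # "a little over GBP 16K" for shelf + disks + controller + 3yr support
units = 2                     # one per node, mirrored with SANsymphony-V
pence_per_gb = units * cost_per_unit_gbp / usable_gb * 100

print(f"Usable (single copy): {usable_gb:,} GB")
print(f"Cost for both units : GBP {units * cost_per_unit_gbp:,}")
print(f"~{pence_per_gb:.0f}p per mirrored GB")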

SANsymphony-V licensing will increase that cost a lot, mind.
 