SAS JBOD

We're evaluating some low-cost hardware storage solutions to potentially replace some legacy arrays within our estate (several EMC CX4 arrays).

We have had great experience with DataCore SANsymphony-V and will most likely use it on top of commodity hardware. It can easily be zoned into the existing Fibre Channel network and will allow us to migrate volumes off the EMC arrays. We don't have a massive amount of experience with JBOD storage; at a minimum we would have two identical servers with identical JBOD-attached storage and mirror volumes using SANsymphony-V. We've seen a few JBODs which have dual controllers: how does the failover between controllers work, and what type of RAID controllers support dual SAS interfaces? Or do you connect using an HBA and the JBOD does the RAID?
 
You could use an approved JBOD and Server 2012 R2 to build a Scale-Out File Server; Aidan Finn has quite a bit on his blog, e.g. here. That scenario uses HBAs and leaves Windows to manage the (tiered) storage.

[Diagram: dual-controller SAS JBOD connectivity to clustered file server nodes]


We need Fibre Channel, which rules out SOFS, but the above diagram illustrates nicely how dual SAS JBOD controllers would work, so thanks for that. We want each node to be physically independent, as we have a proven track record with SANsymphony-V and know it's rock solid with volume mirroring.
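As we read the diagram, the failover is essentially just multipathing at the SAS layer: each server HBA port lands on a different controller module in the JBOD, so every drive is visible down two independent paths, and the host's multipath layer moves I/O to the surviving path when a controller drops. A toy sketch of the idea (purely illustrative Python, all names are our own, not any vendor's API):

Code:
# Toy model of dual-controller SAS failover: one LUN, two paths,
# I/O moves to the surviving path when a controller module fails.

class SasPath:
    def __init__(self, name):
        self.name = name
        self.alive = True

class MultipathLun:
    def __init__(self, paths):
        self.paths = paths  # one path per JBOD controller module

    def write(self, block):
        for path in self.paths:
            if path.alive:
                print(f"I/O for block {block} via {path.name}")
                return
        raise IOError("all paths to the JBOD are down")

lun = MultipathLun([SasPath("HBA port 0 -> controller A"),
                    SasPath("HBA port 1 -> controller B")])
lun.write(42)               # normally serviced via controller A
lun.paths[0].alive = False  # controller A fails
lun.write(43)               # I/O fails over to controller B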

We'll be using two identical servers, each with one or two JBODs attached containing mixed disk types (SAS and SATA), with each node identically configured. Storage will be presented to hosts as Fibre Channel, and we'll mirror every volume between the two nodes. RAID will protect against single-disk failure, and the mirroring with active/active access will provide node resiliency and performance. We'll slam 256GB of RAM into each one as it's so cheap.
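To sketch what we're expecting from the volume mirroring (again purely illustrative, not DataCore's actual API): a write is only acknowledged to the host once both nodes have committed it, so either node can drop without losing committed data, while reads can be serviced from either side.

Code:
# Rough sketch of synchronous active/active volume mirroring
# between two storage nodes (illustrative only).

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data
        return True  # local RAID handles single-disk failure

class MirroredVolume:
    def __init__(self, node_a, node_b):
        self.nodes = (node_a, node_b)

    def write(self, lba, data):
        # Synchronous mirror: commit on both nodes before acking the host.
        for node in self.nodes:
            if not node.write(lba, data):
                raise IOError(f"mirror write failed on {node.name}")
        return "ack"  # host sees the write complete only now

    def read(self, lba):
        # Active/active: either node can service the read.
        return self.nodes[0].blocks.get(lba) or self.nodes[1].blocks.get(lba)

vol = MirroredVolume(StorageNode("node1"), StorageNode("node2"))
vol.write(0, b"data")
print(vol.read(0))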

Thanks.
 
I think it's a myth that firmware bugs and niggles don't impact the leading manufacturers; we've suffered greatly from bugs on EMC and NetApp. While both have been great at helping resolve issues, a SAN is always connected to multiple systems. True, support may be easier, but then it costs.

We've been quoted £330,000 for a pair of NetApp heads, with disks and shelves at £180K and a further £60K for 36 months' worth of support. If we went with Nexenta using JBOD we'd have more disks and more IOPS for £150K. Sure, NetApp would reduce the costs when pushed (this was the first quote), but then the Nexenta offering was a first quote too.

If the price was only a couple of £s of difference I'd see your point, but when it's £ks of difference, and you don't need the NetApp/EMC support/badge/plugins but do need mass storage, there are other ways of doing it.

I'm all about using the right tool for the job. I'm an EMC fan, not so keen on NetApp, but I also love DataCore SANsymphony-V.

The problem with VSA/VSAN is that it's VM only; we need FC to present storage to Oracle RAC clusters, physical MS boxes, Unix, etc. We've generally got one of pretty much everything that was ever made.
 
Code:
Physical Slot	Rough Capacity (GB)	Description
HDD0	4,000	RAID 5 Data (1)
HDD1	4,000	RAID 5 Data (2)
HDD2	4,000	RAID 5 Data (3)
HDD3	4,000	RAID 5 Data (4)
HDD4	4,000	RAID 5 Data (5)
HDD5	4,000	RAID 5 Parity (1)
HDD6	4,000	RAID 5 Data (1)
HDD7	4,000	RAID 5 Data (2)
HDD8	4,000	RAID 5 Data (3)
HDD9	4,000	RAID 5 Data (4)
HDD10	4,000	RAID 5 Data (5)
HDD11	4,000	RAID 5 Parity (1)
HDD12	600	RAID 5 Data (1)
HDD13	600	RAID 5 Data (2)
HDD14	600	RAID 5 Data (3)
HDD15	600	RAID 5 Data (4)
HDD16	600	RAID 5 Data (5)
HDD17	600	RAID 5 Data (6)
HDD18	600	RAID 5 Data (7)
HDD19	600	RAID 5 Data (8)
HDD20	600	RAID 5 Parity (1)
HDD21	600	RAID 5 Data (1)
HDD22	600	RAID 5 Data (2)
HDD23	600	RAID 5 Data (3)
HDD24	600	RAID 5 Data (4)
HDD25	600	RAID 5 Data (5)
HDD26	600	RAID 5 Data (6)
HDD27	600	RAID 5 Data (7)
HDD28	600	RAID 5 Data (8)
HDD29	600	RAID 5 Parity (1)
HDD30	600	RAID 5 Data (1)
HDD31	600	RAID 5 Data (2)
HDD32	600	RAID 5 Data (3)
HDD33	600	RAID 5 Data (4)
HDD34	600	RAID 5 Data (5)
HDD35	600	RAID 5 Data (6)
HDD36	600	RAID 5 Data (7)
HDD37	600	RAID 5 Data (8)
HDD38	600	RAID 5 Parity (1)
HDD39	600	RAID 5 Data (1)
HDD40	600	RAID 5 Data (2)
HDD41	600	RAID 5 Data (3)
HDD42	600	RAID 5 Data (4)
HDD43	600	RAID 5 Data (5)
HDD44	600	RAID 5 Data (6)
HDD45	600	RAID 5 Data (7)
HDD46	600	RAID 5 Data (8)
HDD47	600	RAID 5 Parity (1)
HDD48	600	RAID 10 Data (1)
HDD49	600	RAID 10 Data (2)
HDD50	600	RAID 10 Mirror (1)
HDD51	600	RAID 10 Mirror (2)
HDD52	600	RAID 10 Data (1)
HDD53	600	RAID 10 Data (2)
HDD54	600	RAID 10 Mirror (1)
HDD55	600	RAID 10 Mirror (2)
HDD56	600	Global Hotspare (1)
HDD57	600	Global Hotspare (2)
HDD58	4,000	Global Hotspare (3)
HDD59	4,000	Global Hotspare (4)

So we produced a very rough sizing (ignoring binary vs decimal and the exact sizing of the RAID groups), but the above works out at 61,600 GB usable for a little over £16K; this includes the cost of the shelf, the disks and the RAID controller with three years of on-site support. We would have two of these and mirror the data between the JBODs. That's roughly 53p per mirrored GB, or about 1.5p per GB per month over the 36 months of support.
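A quick sanity check of that figure, summing the data disks per RAID group from the table above (parity, mirror and hotspare disks excluded from usable capacity):

Code:
# Back-of-envelope check of the layout above (decimal GB throughout).
raid5_groups = [
    (5, 4000),  # HDD0-5:   5 data + 1 parity, 4 TB drives
    (5, 4000),  # HDD6-11
    (8, 600),   # HDD12-20: 8 data + 1 parity, 600 GB drives
    (8, 600),   # HDD21-29
    (8, 600),   # HDD30-38
    (8, 600),   # HDD39-47
]
raid10_groups = [
    (2, 600),   # HDD48-51: 2 data + 2 mirror
    (2, 600),   # HDD52-55
]
usable_gb = sum(d * size for d, size in raid5_groups) \
          + sum(d * size for d, size in raid10_groups)
print(usable_gb)  # 61600 GB usable per JBOD (hotspares excluded)

cost_per_node = 16_000  # shelf + disks + controller + 3yr support (rough)
mirrored_cost = 2 * cost_per_node
print(round(100 * mirrored_cost / usable_gb, 1))       # ~52p per mirrored GB
print(round(100 * mirrored_cost / usable_gb / 36, 2))  # ~1.4p per GB per month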

SANsymphony-V licensing will increase that cost a lot, mind.
 