Help me reat my ZFS pool please -- Open Solaris

Sorry if this should be in the harddrive sub forum but it seemed to fit here better.

I am in the process of building a file server, for which I was recommended to try a ZFS file system.

It consists of a 3ware 9650SE-12 port RAID controller and 5 * 1.5TB WD Greens in a RAID 5 array; what I'm trying to achieve is a single partition of approx. 7.5TB.

I have configured the RAID controller and drives (I assume it's sitting there unformatted?) but now need to create a ZFS pool on the array (a separate pool to the OS drive, I should think).

The example given here is this:
# zpool create tank c1t2d0

I can just about grasp the command, however:
What's this c1t2d0? How can I point it to my single RAID array?

--EDIT
Sorry about the miss-spelt title, obviously it should be create
 
Last edited:

Want to know what c1t2d0 is?
My Terminal... said:
tom@office-unx-ws01:~$ pfexec format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c5d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
1. c5d1 <drive type unknown>
/pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
2. c6d0 <drive type unknown>
/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
3. c6d1 <drive type unknown>
/pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
Specify disk (enter its number): ^C
tom@office-unx-ws01:~$

Probably best to Ctrl-C straight out once you've seen what you need.

So from the above you can see that disk 0 is /dev/dsk/c5d0.

The first partition on that disk would be c5d0p1 (partition 2 would be c5d0p2, and so on); obviously yours won't be that exact name unless you've got the same controllers as me.

You can also use the format utility to look at the partition structure, disk info, etc.
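
If it helps, here's a rough sketch of what comes next once you know the device name. I'm assuming your 3ware array turns up as a single device called c5d0, so substitute whatever format actually reports on your box:

# zpool create tank c5d0
# zpool status tank
# zfs list tank

The first line builds the pool on the whole device, zpool status should show it ONLINE, and zfs list shows it already mounted at /tank; there's no separate newfs/mount step with ZFS.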

(I'm on OpenSolaris Nevada build 111)
 
Thanks guys, really helped me out.
I appear to have hit an interesting dilemma. Do I:

a.) forget ZFS and go for another file system, making use of the 3ware 9650SE's on-board XOR processor, or

b.) stick with ZFS and RAID-Z, and presumably lose the hardware acceleration of the 9650SE?


Could someone confirm those are my choices?
 
Yeah, those are pretty much your choices. There is at least one other option: buy another 5 disks, set up 5 two-disk RAID-0 arrays on the 3ware, and use them to create a raidz pool.

Hardware RAID won't give you all the features of a raidz pool; you need to read more about ZFS and decide whether you want those features. You should also check on your controller whether write caching is enabled and whether the cache is battery-backed, especially when running in JBOD mode. Some controllers can't handle that.
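
For what it's worth, once the controller is exporting the units individually, the raidz pool itself is a one-liner. The device names below are just placeholders, so use whatever format shows for your exported units:

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zpool status tank

That gives you one unit's worth of parity across the five, and zpool status should show the raidz1 layout with all five devices ONLINE.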
 
Also, the benefits of hardware XOR engines over software aren't as big as you might think, as the XOR calculation isn't usually the bottleneck for write speed on RAID 5. The main cause of the slowness is that, to do a single write, you have to do a bunch of reads first to work out the parity. No amount of XOR magic is going to get around the fact that RAID 5 doesn't perform well at writes, because it's not designed to.

I'd JBOD the lot and do it in software. Bear in mind that if your controller fails and you've got hardware RAID, you'll need to find the same controller to be able to rescue the array from the disks. If it's in software, then any Solaris installation on any old hardware will be able to see the zpool on the disks.
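
That portability really is just an export on the old box and an import on the new one (pool name tank assumed here):

# zpool export tank

then, on any Solaris box that can see the disks:

# zpool import tank

Running zpool import with no arguments will also list any pools it can find on the attached disks, which is handy if you've forgotten the name.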
 
Cheers guys, much appreciated.
The advice seems sound. I assume that, if I went down the route MT- suggested, that is:

RAID 0 every pair of drives

this would double the parity space, as ZFS would see every pair as a single drive. Not a huge consideration with a large number of drives, but with 5/6 drives it's too big a trade-off for me.

Thanks again for the advice, people
 
You wouldn't lose any higher proportion to parity with that setup than with 5 disks:
5 * 1.5TB in raidz would give 6TB of storage, losing 20% to parity
5 * (2 * 1.5TB in RAID 0) would give 12TB of storage, losing 20% to parity

You are slightly more likely to suffer a double disk failure with the latter setup, though.

'Course, depends if your app needs 600MB/s read speed, or can live with just 400MB/s :D
 