Little RAID (mirror) and file server help....

Hi guys,

Currently enjoying my little linux adventure (well, certainly more than my attempt a few years ago), but I've hit a stumbling block.

I'm running CentOS 5.2 (updated) with the GNOME desktop. I've managed to get my Samba share working - a few questions follow.

The setup
PIII-S 1.4GHz Tualatin
512MB PC133 SDRAM
VIA PLE133
VIA VT6421A SATA controller
2x 80GB Hitachi SATA drives connected to the above controller (would like ext3 or a similar FS)
20GB laptop IDE drive (OS - ext3)

The questions/issues

1. Right, I have disabled the firewall and SELinux (it's a LAN-only device for the moment). It has access to the internet, but I generally avoid using it for that. Is this OK?

2. I've had some serious problems with the SATA drives - the system sometimes freezes during boot when LVM is set up in a certain way across the SATA disks. Really odd. It could only be fixed by unplugging one of the SATA drives. Can anyone shed any light on this?

3. I want to set up software RAID 1, but for some reason LVM is asking for 3 disks. I've looked at some howtos but they're useless to me, as I don't understand what the hell is going on. I like the LVM GUI, but I can't use LVM as it wants 3 disks. What do I do? I've currently set up two volume groups called NAS1 and NAS2, mounted at /filestorage1 and /filestorage2.

If I can't set up RAID 1 simply (given that I am a n00b), is there any way I can make the two disks sync to each other without implementing RAID at the filesystem level?

I would like to say I like the way Linux handles networking, i.e. it just works. No faffing about with Windows networked drives not always appearing, etc.

4. In the Samba config, I set the guest user to 'root'. I did this because I want everyone to have full access to the shares, i.e. full permissions. Are there any dangers to the server itself in doing this?
 
smids said:
1. Right, I have disabled the firewall and SELinux (it's a LAN-only device for the moment). It has access to the internet, but I generally avoid using it for that. Is this OK?
So long as it's behind a NAT router you'll be fine.

smids said:
two volume groups called NAS1 and NAS2, mounted at /filestorage1 and /filestorage2.
Wait, you have a total of two disks in what you want to be a RAID1 array, right? Why do you have two groups? LVM should mash both disks together and take care of the rest. It should not present you with two different groups with different mount points. ...Unless you're talking about more than 2 disks here.
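For reference, here's a minimal sketch of how LVM would normally combine the two disks into a single volume group with one mount point (the partition names and the 'nas'/'storage' names are just examples, and note this only spans the disks - it isn't a mirror):

Code:
pvcreate /dev/sda1 /dev/sdb1           # mark both partitions as LVM physical volumes
vgcreate nas /dev/sda1 /dev/sdb1       # one volume group containing both disks
lvcreate -l 100%FREE -n storage nas    # one logical volume using all the space
mkfs.ext3 /dev/nas/storage             # format it and mount it in one place
mount /dev/nas/storage /filestorage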

smids said:
I would like to say I like the way Linux handles networking, i.e. it just works. No faffing about with Windows networked drives not always appearing, etc.
You don't even know the half of it. You can do some amazing things. :D

Somebody else will have to help with the rest. :p
 
If you can't get RAID 1 the way you like it in LVM, you could set up the RAID 1 with Linux software RAID (mdadm), then put LVM on the resulting device?

If you want to use any of the other LVM features that is.
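Roughly, assuming the two partitions are /dev/sda1 and /dev/sdb1 and using made-up volume names, that approach would look something like this:

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0                      # the mirror becomes a single LVM physical volume
vgcreate nas /dev/md0
lvcreate -l 100%FREE -n storage nas
mkfs.ext3 /dev/nas/storage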
 
I am indeed behind a NAT :). I'm a linux n00b but I have some networking knowledge :D.

Indeed, I have 2 disks, both 80GB Hitachis for the moment. I can mash the disks together with LVM but cannot mirror them, because as soon as I select RAID 1 the damn thing says I need 3 physical volumes! Why?! I don't want a damn spare, I just want to mirror the two disks!

It is because I could not set up a RAID mirror in LVM that I chose to create two separate logical volumes, with the hope that some other process could sync them.

BigglesPiP - how do I do that? The only time I've seen anything remotely like what you suggest was in Disk Druid during the installation. I wanted to sort out the OS install first, so I left the RAID array until after the install in the hope that something like Disk Druid would be available afterwards - how wrong I was!

I would love to know how to set up software RAID in CentOS. I don't need it for any reason other than to insure against a disk failure.
 
Software RAID outside LVM would be set up during installation, yes.

I have an IBM X3250 set up with Software RAID 1 across its 2 160GB drives. That's running Debian etch.
 
If you type 'df -h', what do you get?

I'm assuming something along the lines of:

Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1            19.1G  634M  3.8G  15% /               (20GB laptop drive)
tmpfs                 126M     0  126M   0% /dev/shm
/dev/sda2              80G     0   80G   0% /filestorage1   (80GB Hitachi)
/dev/sda1              80G     0   80G   0% /filestorage2   (80GB Hitachi)

Yes? You are quite right that it is MUCH easier to build such an array during the graphical installation. I don't know exactly what you're doing to hit the problems you have, because graphical steps are far harder to describe on these boards, so I'll give you the commands to do it from the command line; then if there are errors we can all get a better idea of what's going on.

Put any data on these two disks somewhere else for this process. If that's not possible, I can take you through a vastly more complicated route where you create a small array using part of each disk and then slowly transfer data and resize as you go. Anyway....

Code:
mdadm --create --verbose /dev/md1 --level=1 \
    --raid-devices=2 /dev/sda1 /dev/sda2

You will need to change the device names to suit your setup. You can add further devices to the array later, or remove a device if it fails.
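For example (device names here are only illustrative), adding and removing members later looks something like this:

Code:
mdadm /dev/md1 --add /dev/sdc1       # add a new partition; it resyncs automatically
mdadm /dev/md1 --fail /dev/sda1      # mark a dying member as failed...
mdadm /dev/md1 --remove /dev/sda1    # ...then remove it from the array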

All good so far?

You then just need to format the array that you've created: mkfs.ext3 /dev/md1

If this works then you just need to edit /etc/fstab (and anything else that refers to the old arrangement) and you're all set.
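The fstab entry for the new array would look something like this (the mount point is just an example):

Code:
/dev/md1    /filestorage    ext3    defaults    0 2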

I'm sure if I've made any errors that people will let me/you know very quickly. Good luck :)
 
After much fiddling, I got myself to this point (believe me, I've spent 30 minutes playing with command line options to get here!).

Code:
[root@nas ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdc2              18G  3.2G   14G  19% /
/dev/hdc1              99M   16M   78M  18% /boot
tmpfs                 248M     0  248M   0% /dev/shm
/dev/sda1              76G  180M   72G   1% /filestorage
/dev/sdb1              76G  180M   72G   1% /filestorage
[root@nas ~]# mdadm --create --verbose /dev/md1 --level=1 \ --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: no raid-disks specified.
[root@nas ~]# mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: Cannot open /dev/sda1: Device or resource busy
mdadm: Cannot open /dev/sdb1: Device or resource busy
mdadm: create aborted
Any ideas?

EDIT: Some progress.

Apparently there is now a RAID array set up (after a few reboots), and I seem to have stopped the system crashes with the kernel option 'nodmraid'. I cannot, however, format the filesystem.

Code:
[root@nas ~]# mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to contain an ext2fs file system
    size=80413324K  mtime=Mon Feb  2 07:26:38 2009
mdadm: /dev/sda1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Feb  2 08:03:49 2009
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=80413324K  mtime=Mon Feb  2 07:26:38 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Feb  2 08:03:49 2009
mdadm: size set to 80413248K
Continue creating array? n
mdadm: create aborted.
[root@nas ~]# fdisk /dev/md1

Unable to read /dev/md1
[root@nas ~]# mount /dev/md1 /filestorage
mount: you must specify the filesystem type
[root@nas ~]# mkfs.ext3 /dev/md1
mke2fs 1.39 (29-May-2006)
mkfs.ext3: Device size reported to be zero.  Invalid partition specified, or
        partition table wasn't reread after running fdisk, due to
        a modified partition being busy and in use.  You may need to reboot
        to re-read your partition table.
 
Eeeek, sorry - I should have said: you need to unmount the drives before building your array. It's easiest if you unmount them and remove their entries from fstab at the same time.

It will flag up warnings about them appearing to have filesystems on them, but you should be able to plough straight on and then format once you've turned them into an array. Look in fstab, see what, if anything, is there pertaining to those two disks, and delete it. Continue making the array if it warns about an existing array or filesystem (the former won't be there after a reboot, and the latter you will deal with when formatting the array).
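Going by your df output, something along these lines, then re-run the mdadm create:

Code:
umount /dev/sda1
umount /dev/sdb1
# and delete any /dev/sda1 or /dev/sdb1 lines from /etc/fstab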
 
Is there any reason why the system hangs on creation of the md0 array?

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

I just ran the above to see if rebuilding the array would let me fix things but, as before, it simply crashed the system. Could this be the controller?
 
Several possibilities; it's not likely to be the controller - it's onboard, isn't it?

What was in fstab about the drives?

When you say crash, what happens? Does it hang or restart? If it restarts, it's almost certainly the controller.
 
It's not onboard, it's a PCI card (but yes, it might as well be onboard as it is certainly not a hardware RAID card - it's a VIA SATA software RAID card).

Nothing is in fstab about the drives, it only lists my OS drive.

When it crashes, it hangs. No reboot or anything - a full lockup: the mouse stops, everything. This sometimes happens on boot when LVM has done something to the two drives. It is not the RAM, as that has been tested for 24 hours with memtest.
 
Hmm, sounds like you should try fdisking the two drives before creating the array. I thought it would be able to handle not doing so (I've done it before) but your hardware may not like it.

Just check that there's nothing in your md setup (cat /proc/mdstat), then fdisk the two drives to give them RAID partitions before you create the array.
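Something along these lines - assuming the half-built array from earlier shows up as /dev/md1 (use whatever name mdstat actually reports):

Code:
cat /proc/mdstat            # list any existing arrays
mdadm --stop /dev/md1       # stop a stale array so its member disks are free again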
 
Would it just be easier to reinstall do you think?

It's things like this that turn people away. It should be simple if the kernel sees the hardware, however for some reason, the OS doesn't want to play ball... I'm going to persist as I liked it before I tried to set up a RAID. It works well when it does - it's just damn hard to get it there!
 
Just in case you need help with the commands for partitioning (computer responses are in square brackets):

fdisk /dev/sda
[Command (m for help):] t
[Partition number (1-4):] 1
[Partition ID (L to list options):] fd
[Command (m for help):] w

then do the same for /dev/sdb

edit: Thought I'd explain what this all means.
- The 't' command changes a partition's type (system ID)
- Partition numbers 1-4 are the primary partitions; numbers from 5 upwards are logical partitions inside an extended partition (you can have at most 4 primary partitions on any one drive)
- Partition ID fd is 'Linux raid autodetect'
- The final command, 'w', writes the changes to the disk
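Once both disks are done, something like the following should confirm the change took (exact output will vary):

Code:
fdisk -l /dev/sda            # should now show type fd, "Linux raid autodetect"
fdisk -l /dev/sdb
cat /proc/mdstat             # and check the array state once you recreate it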
 
How far are you getting now? I know these things can be frustrating, but there is always an answer, and getting there means you learn more in the process. Any problems and the forums are here to help.
 
Nowhere, to be exact. It said that no partitions were defined, even though they were. I cannot get the system to recognise the partitions, despite formatting, trying to use LVM, etc. Nothing seems to make this array work!

I think I'll have to try a different distro to see if that kernel works better with my hardware.

I'm trying a new ubuntu install to see if that works. Pity, I really like CentOS.

If ubuntu doesn't work, I'll try a new CentOS install and this time do everything through Disk Druid! :D

Thanks for your help though, Nefarious - you've been great. I've gained a better understanding from some of the commands you listed. I've also had to do some reading, which, with your pointers, has helped me delve a little deeper into understanding the system.

Got a fair amount of time today given that I can't get to work!
 
I will say one thing about the network security you've got: NAT isn't enough. I had a linux box and didn't bother much with security, but I didn't expose Samba and had a complex password. I woke up one morning to find that it had been accessed via SSH and all the passwords changed - it was a nightmare to fix. And I was behind NAT, a firewall AND had a dynamic IP address. Set your SSH port (if you run an SSH server) to something other than 22; it stops the automated scripts that script kiddies use from finding you so easily.
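For example, in /etc/ssh/sshd_config (the port number here is just an example - pick your own), followed by a restart of sshd:

Code:
Port 2222              # anything other than the default 22
PermitRootLogin no     # and don't let root log in directly over SSH
# then: service sshd restart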

Don't give root permissions over Samba - it's just a question of setting your create and directory masks properly. A mode of 777 gives full permissions to all users and groups. A quick chmod and/or chown to the relevant mode is a lot easier than having to completely reinstall a system that's been rootkitted.
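As a rough sketch, a wide-open share that doesn't map guests to root could look like this in smb.conf (the share name and path are examples; 'nobody' is Samba's usual default guest account):

Code:
[filestorage]
   path = /filestorage
   guest ok = yes
   guest account = nobody
   read only = no
   create mask = 0777
   directory mask = 0777

You'd also need the directory itself to be writable by that account, e.g. chmod 777 /filestorage.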
 