Multiple Disk controllers + debian

Hi all,
I'm having a bit of an issue here, or maybe what could be multiple issues, and wondering if anyone has come across this before.


Got a PC Chips M848 board with a Sempron and 512MB RAM.

Also, 2 versions of the SIL0680 RAID card (configured to act as an extra 4 IDE channels), both PCI cards.

An Adaptec Ultra2 LVD SCSI PCI card.


I have 5x 160GB + 1x 200GB drives hanging off the 2 onboard channels and the 2x SIL0680s, giving me the basis for a 780GB RAID5.

And the idea is to use the 2x 10GB SCSI drives as the system drives, in a RAID 1 formation.

No matter what I do, I can only seem to get the BIOS message to appear for 2 of the cards at a time, even if all three are plugged in. Which 2 depends on the order they sit in the PCI slots.

Anyway, that doesn't seem to be a problem: with SIL-1 in slot 1, SIL-2 in slot 3 and the SCSI in slot 5 it seems to boot, but misses out the SCSI BIOS message.


The Debian installer detects all the drives and allows me to configure them as I wish.

However, the first time I did it GRUB installed to the first IDE disk, which then gave an Error 17 upon reboot, probably due to a RAID error?

I tried again, and this time it installed LILO. I chose md1 (which is /boot on the SCSI) and I got the same GRUB error. (How do I clean the MBR of a disk without creating a Windows boot disk and doing fdisk /mbr?)

But my mobo BIOS will only let me boot from either onboard channel or either SIL0680. If I reorder the cards, then I lose a SIL0680 but gain access to the SCSI.

One thing I haven't tried: maybe I can reorder the cards, get my mobo BIOS to see the SCSI, install the bootloader onto that, and hope Linux will detect the other SIL0680 card itself, like it's currently doing for the SCSI card.


1. How do I clean the MBR of a disk without using fdisk /mbr?
answer: dd if=/dev/zero of=/dev/hda bs=446 count=1 <-- will that work?

2. Anyone got any tips or notes on configuring a RAID1 SCSI system disk with a 6-disk EIDE RAID5 hanging off it? (Rough sketch of what I'm aiming for below.)

3. LILO or GRUB: why did I get one the first time and the other the next? Which one should I go for? (tbh, I'd rather LILO...)
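
For question 2, this is roughly the layout I'm after in mdadm terms. Device names below are purely illustrative (the onboard and SIL0680 channels will come up as whatever the kernel assigns), so treat it as a sketch of the plan rather than what I actually typed into the installer:

Code:
# 6-disk RAID5 across the EIDE drives (5x 160GB + 1x 200GB; the smallest member sets the usable size)
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
    /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1 /dev/hdj1

# RAID1 mirror across the two 10GB SCSI disks for the system
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1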


Am I crazy?

thanks
 
You can reinstall GRUB or LILO once you are inside the OS, by invoking the GRUB console first.

I tend to use grub so typically you just tell it where your root partition is:

root (hd0,1) (for example, which is primary master, second partition, etc.)

Then setup (hd0) (for example, to write to the MBR of the first HD).
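
Put together, the whole session in the GRUB legacy console looks roughly like this (just a sketch, assuming /boot lives on the second partition of the first BIOS disk; adjust (hd0,1) to suit):

Code:
grub                 (start the GRUB legacy shell as root)
grub> root (hd0,1)   (the partition holding /boot/grub - first disk, second partition here)
grub> setup (hd0)    (write stage1 to the MBR of the first disk)
grub> quit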

The MBR is exactly 512 bytes (well, 446 as you say if you exclude the partition table).

dd if=/dev/zero of=/dev/hda bs=512 count=1 would wipe the lot; with bs=446 I think it would leave your partition table intact, which is maybe what you want? (Note there's no trailing slash on /dev/hda.)
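
For reference, the two variants side by side (a sketch against /dev/hda - triple-check you have the right device before running either):

Code:
# zero only the 446 bytes of boot code, leaving the partition table alone
dd if=/dev/zero of=/dev/hda bs=446 count=1

# zero the full 512-byte MBR, partition table included
dd if=/dev/zero of=/dev/hda bs=512 count=1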

Can't help you with the RAID stuff, don't know a load about it unfortunately.
 
Well, I've been at this since last night now. It's incredibly infuriating.

I have since switched to another card (a U160), and this now seems to be fine in the BIOS process (i.e. every card goes through its stages, in PCI slot order). But no matter what I do here, I can never get the BIOS to boot from the SCSI disks; it's always the onboard controller, then the 4 IDE disks, and the CD. Hmmm.

No bother, I'll put grub on the first IDE disk and tell it to boot the root FS on the SCSI drive.

This just won't work! The Debian installer says it's done it, but I get Error 17 upon reboot.

If I do it manually as you suggest, I can't get anywhere: the GRUB console gives an error loading stage2, even using the rescue functions etc. LILO doesn't seem to work either; I get L 01 01 01 01 etc. I thought booting a RAID1 Linux install was meant to be child's play.
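
(For reference, the rescue-mode dance I mean is roughly the below - just a sketch, assuming the installed root ends up mounted at /target and hda is the disk the BIOS actually boots:)

Code:
# from the installer's rescue shell, with the installed root mounted at /target
mount --bind /dev /target/dev
mount -t proc proc /target/proc
chroot /target /bin/bash

# inside the chroot: reinstall GRUB legacy onto the disk the BIOS boots from
grub-install --recheck /dev/hda
update-grub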

I have now created a small partition on the first IDE disk (the 200 gigger) and placed /boot there. Hopefully this will work. I really did want to have my whole system disk on RAID1, but perhaps once it's running I'll be able to adjust this...
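
(If it does come up, the rough plan for folding /boot back onto the SCSI RAID1 later would be something like this - a sketch only, assuming md1 is the RAID1 across the two SCSI disks and the BIOS can actually be persuaded to boot them, which is the sticking point:)

Code:
# copy the current /boot onto the RAID1
mkdir /mnt/newboot
mount /dev/md1 /mnt/newboot
cp -a /boot/. /mnt/newboot/
umount /mnt/newboot

# point /etc/fstab at the array instead of the IDE partition, e.g.
#   /dev/md1  /boot  ext3  defaults  0  2
umount /boot
mount /boot

# put GRUB's boot code on both SCSI disks so either one can boot alone
grub-install /dev/sda
grub-install /dev/sdb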

goddamn WHY DOESN'T IT JUST WORK
 
Yay... Well, as planned, it worked putting /boot on the first IDE disk, so now my system is up. But I forgot to do the expert install and now I have the 486 kernel. Worth going back and choosing the 686 one?
 
Actually it was just a case of literally apt-get install linux-image-686, and I now have the 686 kernel installed. AFAICS it only really matters if you have 1GB or more RAM, and this box has only 512MB just now.
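
(If you want to see what flavours are on offer before picking one, something like this does it:)

Code:
# list the prebuilt kernel image flavours
apt-cache search linux-image | grep '^linux-image'

# grab the 686 flavour and confirm it after a reboot
apt-get install linux-image-686
uname -r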

Some output
Code:
datastore:/home# uname -a
Linux datastore 2.6.18-6-686 #1 SMP Fri Jun 6 22:22:11 UTC 2008 i686 GNU/Linux
datastore:/home# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/system_vol-root_lv
                      596M  130M  466M  22% /
tmpfs                 253M     0  253M   0% /lib/init/rw
udev                   10M  112K  9.9M   2% /dev
tmpfs                 253M     0  253M   0% /dev/shm
/dev/hda1              89M   14M   70M  17% /boot
/dev/md1              177M   13K  167M   1% /bootbak
/dev/mapper/system_vol-home_lv
                      920M  300K  920M   1% /home
/dev/md0              746G  1.5M  746G   1% /mnt/filestore
/dev/hda2              24G  544K   24G   1% /sysbak
/dev/mapper/system_vol-tmp_lv
                      263M   13K  249M   1% /tmp
/dev/mapper/system_vol-usr_lv
                      4.0G  217M  3.8G   6% /usr
/dev/mapper/system_vol-var_lv
                      2.0G  161M  1.9G   8% /var
datastore:/home#
 
Actually it was just a case of literally apt-get install linux-image-686, and I now have the 686 kernel installed. AFAICS it only really matters if you have 1GB or more RAM, and this box has only 512MB just now.

As you say, just apt-get it. Nah, nothing to do with RAM really. It's about CPU instruction sets and compile optimisations (3DNOW, SSE, MMX registers), so there's a larger CPU instruction set to use with 686. It basically depends on what settings the kernel was compiled with (for example, the memcpy family etc. can use SIMD, so some optimisations can be made there). I doubt the Debian package maintainers use aggressive compiler optimisations when building the stock kernels, since stuff tends to break for a negligible performance increase, but switching from 386 to 686 will see a slight performance gain.
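
If you're curious which of those instruction sets the Sempron actually advertises, the flags line in /proc/cpuinfo will tell you:

Code:
# show the instruction set extensions the CPU reports (mmx, sse, 3dnow, ...)
grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E 'mmx|sse|3dnow'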

It's the 32-bit/64-bit addressing stuff which says how much physical memory you can address...
 
Thanks.
The extra performance, if any, probably isn't needed. I seem to get about 10MB/s read speeds on the RAID5, and over a 100Mbit network this is fine, really.

It's got nothing to do with disk I/O; what I wrote basically means you can make better use of the processor when executing instructions. To the general population this means nothing apart from your machine being slightly quicker, hehe. Disk I/O is seriously slow in comparison to pretty much everything else - you've got tons of stuff going on in the meantime; the disk controller interrupts the CPU and your data is loaded into RAM.

The 1G thing I got from here http://forums.debian.net/viewtopic.php?p=149918&sid=ed897ec635095d3e8a40024b52a5e7db no idea if that guy is right or not...

He's talking crap really... a 32-bit kernel can address 4GB of physical memory (2^32 = 4294967296 bytes = 4GB), and this includes 386 kernels.
 