Accessing Windows RAID

Set up and tested Arch on a spare rig and I'm pretty happy with it. Now I would like to dual boot on my main rig with XP.

Before I do: my main rig is set up with a pair of hard drives using the onboard RAID controller with Windows XP (needed for some boring work stuff). I will be installing Arch onto a separate hard drive.

Is there a way to access the files on the RAID array, or will I have to reinstall XP without the RAID?
 
It was installed around 4 years ago now and I maintain it well. Still as quick as when it was installed.

If I remember rightly, I set the drives up in the BIOS, then created the stripe, then Windows needed some drivers. Gigabyte DS3, Q6600.
 
If it was done in the BIOS, it should be fine to mount as normal. What do you see if you do an "ls -la /dev | grep sd" - does it show up?
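Something like this should show what the kernel can see (just a sketch; your device names will differ):

    ls -la /dev | grep sd     # the raw member disks, e.g. sda, sdb
    cat /proc/partitions      # another view of the disks and partitions the kernel knows about
    blkid                     # filesystem signatures, handy for spotting the NTFS volume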
 
I haven't done the install yet, just trying to get an idea of whether I need to change it first, to be honest.

I was thinking of dropping the XP RAID install, then installing Arch in software RAID on the two drives and XP on the spare drive.
 
OK, the RAID you have on your motherboard is software RAID designed to work for Windows: a minimal controller presents a block device and a driver makes your CPU do all the grunt work for the controller. This is often called FakeRAID or SATA RAID.

Linux just does its own software RAID onto the bare drives.
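If you went down that route it would look roughly like this (a sketch only: the device names and RAID level are just examples, and creating the array wipes whatever is on the disks):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt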

You can get Linux to access the FakeRAID array in the same way it accesses a normal software RAID array, using dmraid. But this is extremely risky; I'd advise doing so only in a dire emergency to recover data, should your motherboard fail or Intel's shoddy RAID software go bonkers.
 
Thanks for the help.

OK, the RAID you have on your motherboard is software RAID designed to work for Windows

That's what I thought to be honest.

Hopefully going to get it sorted later this week one way or another.
 
Never had a problem with dmraid accessing software RAID from ICH5R and ICH6/7. In fact, I put a running Linux system on a dmraid'ed ICH5R in RAID-0 when the support first started :p
"dmraid -f isw -ay" should create a device node in /dev/mapper which you can mount.
 
I just don't trust ICH*R. I turned the PC on with a drive from my RAID0 array unplugged one day and, as it turns out, there's no way to ever recover from that; the array is permanently marked offline.

I would now only ever use hardware RAID (including FakeRAID here) if there are dual controllers, either with battery backup or with no write cache. Otherwise use true software RAID. In other words: can you always replace your RAID controller? Or at least get the data out if one fails?
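That's the appeal of Linux's mdadm software RAID: the metadata lives on the disks themselves, so (roughly, as a sketch; device and array names are just examples) you can move the drives to any Linux box and bring the array back:

    mdadm --examine /dev/sda /dev/sdb    # read the RAID superblocks straight off the member disks
    mdadm --assemble --scan              # reassemble any arrays it finds
    mount /dev/md0 /mnt                  # and the data is available again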


As for performance? Your CPU is the most cost-effective source of processing power; it has plenty to spare for a little I/O.
 
I just don't trust ICH*R. I turned the PC on with a drive from my RAID0 array unplugged one day and, as it turns out, there's no way to ever recover from that; the array is permanently marked offline.

I would now only ever use hardware RAID (including FakeRAID here) if there are dual controllers, either with battery backup or with no write cache. Otherwise use true software RAID. In other words: can you always replace your RAID controller? Or at least get the data out if one fails?

In a perfect world I'd agree, however FakeRAID is pretty reliable these days on *nix. I run Fedora 15 on my main desktop, dual booted with Windows 7; my OS drive is a pair of SSDs in RAID0 and my data drive is a pair of Samsung F3s in RAID0. Both are happily recognised in both Windows 7 and Fedora with no issues at all (and decent performance in both).

And before you ask, yes, I'm aware that RAID0 isn't exactly the most fault-tolerant way to go, but my desktop's all about speed. I run a home server (again Fedora 15) that has a 3ware hardware RAID card for the data array that's RAID6, so that's my safe data location.
 
In a perfect world I'd agree, however FakeRAID is pretty reliable these days on *nix. I run Fedora 15 on my main desktop, dual booted with Windows 7; my OS drive is a pair of SSDs in RAID0 and my data drive is a pair of Samsung F3s in RAID0. Both are happily recognised in both Windows 7 and Fedora with no issues at all (and decent performance in both).

And before you ask, yes, I'm aware that RAID0 isn't exactly the most fault-tolerant way to go, but my desktop's all about speed. I run a home server (again Fedora 15) that has a 3ware hardware RAID card for the data array that's RAID6, so that's my safe data location.

What's the full spec of your home server?
 
In a perfect world I'd agree, however FakeRAID is pretty reliable these days on *nix. I run Fedora 15 on my main desktop, dual booted with Windows 7; my OS drive is a pair of SSDs in RAID0 and my data drive is a pair of Samsung F3s in RAID0. Both are happily recognised in both Windows 7 and Fedora with no issues at all (and decent performance in both).
Well that's not RAID, it's a high-performance storage arrangement where the data therein has no value; I assume you're totally happy that it could all be lost tomorrow.

I run a home server (again Fedora 15) that has a 3ware hardware RAID card for the data array that's RAID6, so that's my safe data location.
More like it, but it's not the way I'd go. If your RAID card fails tomorrow, you're either going to have to wait for the RMA (days) or go with something temporary and recover from backup or destripe the array. I'm not a fan of these RAID adapters; it's a shame the SAS ones don't have a dual-controller option, since the drives all have two ports.

I bet your server has the spare CPU time for software RAID too, so what does the card bring for its cost? If the answer is the ASIC, would £150 (guessing) more on the CPU budget have brought more?
 
What's the full spec of your home server?

It's an AMD 955 BE CPU in a Gigabyte 880GM-UD2H motherboard, 8GB RAM and a 3ware 9690SE RAID controller. Off the top of my head I think it's got 8 x 1TB drives in the RAID array, currently running RAID6 (the 3ware card is a 16-port unit). I always run a separate OS disk, which currently is just a bog-standard 320GB hard drive (I find it much easier to keep OS and data completely separate). The server also contains 2 x dual DVB-T tuners (one PCI, the other USB) as well as a DVB-S2 satellite tuner, all controlled by MythTV.
 
Well that's not RAID, it's a high-performance storage arrangement where the data therein has no value; I assume you're totally happy that it could all be lost tomorrow.

Of course it's RAID, it's just RAID0 for performance, and yes, I am happy that I could lose anything at any time; that's why we have the server. Any data is synchronised to the server's VERY redundant array.


More like it, but it's not the way I'd go. If your RAID card fails tomorrow, you're either going to have to wait for the RMA (days) or go with something temporary and recover from backup or destripe the array. I'm not a fan of these RAID adapters; it's a shame the SAS ones don't have a dual-controller option, since the drives all have two ports.

I bet your server has the spare CPU time for software RAID too, so what does the card bring for its cost? If the answer is the ASIC, would £150 (guessing) more on the CPU budget have brought more?

I don't agree with you here; from my experience, getting software RAID set up and moved between installations etc. is a much more chancy endeavour than having a true hardware RAID card that the OS just sees as a single volume.

Yes, if the RAID card goes down my choices are to wait for the RMA or just to replace it, but from experience, both personally and at work, proper RAID cards like 3ware's or LSI's are pretty bomb-proof. It also means that any drive failures, rebuilds, hot swaps etc. are dealt with extremely easily by the card itself and are totally transparent as far as the OS is concerned.

I like using Fedora as my OS of choice, yet I'm aware that as it's a fairly bleeding-edge distro, components like dmraid could easily get broken by an update.

It also means I can muck around to my heart's content with the OS, potentially breaking things, whilst being sure I can get to my data very rapidly, even just from a live CD.

To be honest, ask yourself why proper server environments ALWAYS use a proper hardware RAID card. It's because it just works...

E-I
 
I guess the other advantage of hardware RAID is that with the battery backup unit on the RAID card I'm pretty certain that even with a major power outage, I won't have any issues with my data. I'm not sure you'd be able to say the same running software RAID?
 
RAID 0 is not RAID, it's AID. :P

And "proper" server environments use a storage controller, usually with dual controllers and battery backup. These are what I develop for a living.

This beast for example:
[image of the disk storage controller]


RAID cards are the bottom of the low end when it comes to servers. As for desktops, they're probably very high end, and I guess your reasons can be justifiable, especially the hot swap. But it's not for me; I can't trust anyone's single enterprise controller as much as you trust that card.
 
RAID 0 is not RAID, it's AID. :P

I'm not disagreeing with RAID0 being more accurately called AID0 :-) as you say, there's no actual redundancy ;-)

I'm also aware that storage controllers are used at the higher-end storage side of things; I guess my statement was a BIT of an oversimplification... My point was that proper hardware-based solutions tend to be more robust than software-based ones.

I guess there are several ways to achieve the same end result. In my work we tend to use large storage servers running enterprise RAID cards, mirrored to an identical server, so each has multiple fault tolerance in power and disk systems and is UPS backed, but our needs are just for significant amounts of storage and archiving, with performance being less of an issue.

So anyway, my home setup is far from an enterprise environment, and I work on the basis that the chances of both my desktop and server spitting their dummies out at the same time are very slim.

I guess my point to the OP is that in a dual-boot environment, Linux software RAID means the data is only readable from within Linux, whereas if you can get the FakeRAID working correctly then it's available to both...
 
What are your views on Solaris and ZFS? I am looking at building a home NAS soon and I'm eyeing up solutions - I like the idea of ZFS as you can add disks of any size and it still has the redundancy.
 
If Linux had good support for ZFS I'd use it today. I love the ability to have a time slider in every directory. It's essentially a software implementation of storage virtualization; another, slightly less useful one is LVM. The SVC code which runs on the storage controller above is virtual - one of my Windows hosts has a 256TB volume. :D
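For what it's worth, the pool and snapshot side that drives the time slider is just this sort of thing (a sketch only; the pool, dataset and disk names are made-up, Linux-style placeholders):

    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd   # single-parity pool across three disks
    zfs create tank/home                                 # a filesystem within the pool
    zfs snapshot tank/home@monday                        # cheap point-in-time snapshot
    ls /tank/home/.zfs/snapshot/monday                   # old versions stay browsable here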
 