2nd X25-M for RAID0 or a 128gb C300?

What do you think is the best option? It's obviously cheaper to get the second X25-M (unless I sell the one I have) but I have an SATA3 board and it's itching to be used! :p
 
Depends on what you want to do with it.
To be honest, I doubt you'll see much difference between RAIDed SSDs and a single SSD, given the throughput, seek times and board bandwidth...
 
I doubt you'll see much difference between RAIDed SSDs and a single SSD, given the throughput, seek times and board bandwidth...

What about RAIDed SATA II drives vs a single SATA III drive?

Can anyone point me at some read/write figures for 80GB X25-Ms in RAID 0?
 
You legend! Thanks bud. :) I see they're on TWO as well.

What happens to 440 MB/s on SATAII drives though, is it wasted?
 
It's only the read speed, so I don't think it's wasted. Me, I've used 2 x Vertex 2 60GB in my main rig, a C300 128GB in my son's Q9550 build, and Intel SSDs in my wife's and daughter's laptops. Smaller drives in RAID 0 are a lot faster, especially when moving files, compared to a single large drive.

Oh! Don't forget to update the firmware and secure erase your older SSD before building the RAID array, to make sure they both start with fresh NAND and on the same firmware.
 
Oh! Don't forget to update the firmware and secure erase your older SSD before building the RAID array, to make sure they both start with fresh NAND and on the same firmware.

That's good advice, cheers. I think the FW on the current drive is up to date but I'll check.

Just seen they can be had for 125 sheets as well. :)
 
You legend! Thanks bud. :) I see they're on TWO as well.

What happens to 440 MB/s on SATAII drives though, is it wasted?

No, that bench was on a SATA II board. The workload is split between the two drives in your RAID array, and the SATA II connection gives 3Gb/s of bandwidth for each link between a drive and the SATA controller; it's not shared between the drives.
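As a sanity check, the numbers stack up. Here's a rough sketch of the arithmetic (the 8b/10b line-encoding overhead is standard for SATA; the figures here are back-of-envelope, not from the benchmark):

```python
# Rough bandwidth arithmetic for two SSDs in RAID 0 on SATA II.
# SATA II signals at 3 Gb/s per link; with 8b/10b encoding,
# every 10 bits on the wire carry 8 bits of actual data.
link_gbps = 3.0
usable_mb_s = link_gbps * 1e9 * (8 / 10) / 8 / 1e6  # bits -> usable bytes
total_mb_s = 2 * usable_mb_s  # two independent drive-to-controller links

print(usable_mb_s)  # 300.0 MB/s per link
print(total_mb_s)   # 600.0 MB/s across both drives
```

So a 440MB/s read from two drives sits comfortably under the ~600MB/s the two links can carry between them, even though a single SATA II link tops out around 300MB/s.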

Like I said in the benchmark post, going for a larger stripe size, like 32KB or the default 128KB, will boost sequential reads from 440MB/s to around 500MB/s. The trade-off is that files smaller than the stripe size won't be spread across both drives, so there's no speed increase for them.
 
OK, will do.

I've been reading up on stripe sizes for SSDs and the general consensus seems to be that larger is better; some even go as high as 1MB. 128KB is the default for 2 drives because the erase blocks in SSDs are generally 64KB. Anyone have any other thoughts on that?
 
OK, will do.

I've been reading up on stripe sizes for SSDs and the general consensus seems to be that larger is better; some even go as high as 1MB. 128KB is the default for 2 drives because the erase blocks in SSDs are generally 64KB. Anyone have any other thoughts on that?

[image: irst2.jpg — Intel RST screenshot]
Intel themselves recommend a 16KB stripe if you are using an SSD, and I trust that they know what they are talking about.
 
Hmm, interesting. Here's something I lifted from another forum;

The main reason for using 128KB instead of 64/32/16 or something else is that an SSD has to erase an entire block to write even a small 4KB file. And guess the block size? 128KB. If you use a smaller stripe, the SSD will rewrite the entire block anyway, so you might as well use the "native" size for the SSD; it's really the NAND flash block size used in SSDs, and it seems to be the fastest size because of this.

And from another one;

While it is true that bigger stripe sizes are better from the RAID controller's perspective, since they mean less computation for the same amount of data moved, there is a problem with bigger sizes as well. In a desktop, the IO sizes are mostly 4K, 8K, 16K. With sizes like this you can end up with a bunch of IOs all going to the same stripe, because they are all accessing the same file (this is very common in Windows). This means one of the SSDs is doing all the work and the other one nothing, losing you any benefit of RAID 0. From a drive-utilisation perspective, smaller is better.

This means it's a trade-off: bigger is better for RAID controller overhead, but smaller is better for drive utilisation. So what it really depends on is the sizes of your IOs and the particulars of the RAID controller. That said, the 64K and 128K sizes suggested here aren't bad choices. I suspect that 64K might be pretty optimal for gaming, for example.
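The utilisation point is easy to see with a toy model. This sketch (hypothetical offsets, assuming a plain two-drive RAID 0 layout with no controller cleverness) shows which drive serves each of eight consecutive 4KB reads at two different stripe sizes:

```python
# Toy model: which drive serves an IO at a given byte offset in 2-drive RAID 0.
def drive_for_offset(offset, stripe_size, n_drives=2):
    """Stripe N of the array lives on drive N mod n_drives."""
    return (offset // stripe_size) % n_drives

# Eight consecutive 4KB reads, all within the first 32KB of the array:
offsets = [i * 4096 for i in range(8)]

print([drive_for_offset(o, 128 * 1024) for o in offsets])
# 128KB stripe -> [0, 0, 0, 0, 0, 0, 0, 0]  (drive 0 does all the work)

print([drive_for_offset(o, 16 * 1024) for o in offsets])
# 16KB stripe  -> [0, 0, 0, 0, 1, 1, 1, 1]  (work split across both drives)
```

With the 128KB stripe, all eight small IOs land on one drive; with the 16KB stripe they're spread across both, which is exactly the trade-off described above.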

Anyone know what the block size of the X25-M is?
 
The new drive just turned up and it has 02HD printed on the outside for the firmware; the current drive shows 2CV102HD in the Toolbox. Is it safe to assume these are the same and up to date, or do I need to update them both?
 
Smashing, thanks bud. :)

I was just thinking about how to erase the current drive; surely that will happen when I create the array, won't it? I've checked its performance and it's right on the 250/70 it should be.

If I have to erase it first, can I do it with a DOS tool as I intend to restore from an image?
 
Use HDDErase 3.3, as that's the tool recommended by Intel. Just put it on a USB stick and boot from it.

Here's how I do it.

- Detach all attached SATA drives (SATA connectors only, not the power connectors)
- Change the BIOS storage configuration to IDE/compatible mode
- Boot from the USB stick
- As soon as you get to the DOS environment, connect the drive you want to erase
- Type HDDErase
- Once done (5 to 15 seconds), power off the system
- Connect the other SSD. Make sure you connect them both to ports 0 and 1, i.e. the first and second SATA II motherboard connectors. (Do not connect the other hard drives yet.)
- Boot into the BIOS, change the storage mode to RAID and press F10
- As soon as the BIOS reboots, press CTRL+F1 (check your motherboard manual)
- Build your array. For Intel SSDs, 16KB is the recommended stripe.
- Once done, connect your image drive to the motherboard, insert your recovery disc and change the boot order to boot from the disc.

Note:
1) It's recommended to do a fresh install of W7 on a freshly erased/formatted RAID 0 array, to avoid carrying over old log files, temp files and redundant files. (Unless you've been doing it regularly and you're familiar with the SSD image consolidation and optimisation procedure.)
2) Do not connect any other HDD while installing W7.
3) Install the Intel IRST driver (the third one), then disable and re-enable write caching on the device.

Enjoy your system.
 
Thanks for the detailed info, it's been a big help.

Having a load of trouble with Acronis at the mo though. :rolleyes:
 