IBM Seagate Savvio 15K.2 146GB

Hi Folks, hope someone can help.

I picked up some Seagate Savvio 15K.2 drives for a total bargain price.

They are all brand new etc., in OE packaging... the only problem being they have an IBM label on instead of a Seagate one.

Is this going to cause an issue?

The IBM part number is 9FU066-039, but I can't find anything on the internet about it.

I want to use these drives with an HP P410 Smart Array. Do you guys think IBM have just slapped a sticker on, or do you think they have customised the firmware?

Nothing in the product description suggested it was an IBM-stickered drive, but for the price I can't complain, cos they were so cheap.

Any advice is more than welcome!

Cheers fellas.
 
That's fantastic news, thank you very much.

The only issue I have is with the warranty, as on the Seagate website I get the classic:

The product you identified was sold as a system component. Please contact your place of purchase for service. Seagate sells many drives to direct OEM (Original Equipment Manufacturer) customers. These products are usually configured for the OEMs only, as components for their systems. You must contact your place of purchase for any warranty support on these drives.

Which leaves me a bit stuck, as IBM don't seem to recognise the part number either, which is dumb since it's got their damn sticker on it!
 
Well, I tried them, and they work fine with HP Smart Array cards.

Irritatingly, they're identified as IBM when using the offline ACU CD. However, my God, they are fast drives.

Just did a little test with a three-drive RAID 5 array, and they are rapid.

Also, you can't even hear them or feel any vibration. Makes me wonder why the VelociRaptor is so damn noisy when these are so much better and faster.
 
Funny you should say that, but it's in the Enterprise section on the WD website, if I remember correctly!

But I was expecting to at least hear the drives seek, or spin up with a slight whine... nada, nothing.

Whereas the Raptors would exhibit lots of high-volume clicking and vibrate a hell of a lot more.

I'm so impressed!

Can't wait to get them all in an array and start having some fun.

I just wish I had money for the new ML110 G7s, as their PCIe buses are all PCIe Gen 2, rather than the G6's mixture of PCIe Gen 1 and 2.

Gets important when dealing with quad NICs and direct-attached storage that's fast!
 
Well, suffice to say I'm impressed again.

Did an install of Windows Storage Server 2008 R2, and even with old firmware on the P410 RAID card, HD Tune showed a MINIMUM sustained transfer rate of 1069MB/s.

Woohoo!

This is with 16 drives in a standard RAID 5 array. Perhaps more performance can be gained with different RAID configurations, but hell, I'm happy with that!
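(For reference, 16 drives in RAID 5 gives 15 data spindles, so that 1069MB/s works out at only about 71MB/s per spindle.)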

Blows the EqualLogic SANs we have at work into the dust!
 
That sounds like your benchmarking tool isn't doing direct I/O. Those speeds sound like you're writing mostly to memory on the controller.

If it beats your SANs for the same test criteria... something is set up wrong. The PS4000XV I mentioned earlier comes in at over 3000MB/s with direct I/O turned off.
I can guarantee you the controller in them is better than the P410i :)

I thought that was the point of HD Tune and HD Tach?

They do GB after GB of transfer to saturate the cache and reveal the true I/O of the drives.

I don't know what a PS4000XV is, but I'm referring to our iSCSI SANs, which would never ever see 3000MB/s, as a) they are iSCSI (4 ports, I think) and b) they only have 16 disks in each of them.

I think our SANs are PS4000Es.

Anyway... the performance I get correlates almost exactly with the performance listed in an LSI 620J enclosure document, which shows a 2426MB/s transfer rate with 24 spindles and a 256KB request size.

Anyway... point is, I'm happy and it's rapid.

Now I've got an NC365T quad NIC that I will team to give me a 4Gb/s connection to the outside world to present iSCSI LUNs on.

Unless you guys can think of a better way of doing it?!!

It's all experimental anyway; this is me building a mega home test lab!

Sounds like you know your SANs though, Skidilliplop.
 
I'm connecting ESX 4.1 via iSCSI to LUNs hosted on Storage Server 2008 R2.

I didn't think that ESX supported true MPIO?

Isn't it just load balancing using hashing?
 
Well, I would want a round-robin form of MPIO.

Can that be done with ESXi 4.1?

I want the full 4Gb/s offered by bonding my quad-port NIC.

Seems to work well with the HP teaming software in Windows, but I've no clue on the ESXi side of things... It's probably got to be configured as MPIO, as the ESXi teaming is a bit rubbish!

Cheers.
 
I've sorted it...

Got MPIO enabled with IOPS set to 1 on the ESXi boxes.
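For anyone searching later, the CLI side on ESXi 4.1 went roughly like this (the vmk/vmhba names and the naa ID below are placeholders for your own setup; check yours with esxcfg-vmknic -l and esxcli nmp device list first):

# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Set the path selection policy on the LUN to round robin
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Switch paths after every single I/O instead of the default 1000
esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1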

Windows Storage Server 2008 R2 is presenting the disk array as four LUNs through an NC365T quad-port Intel NIC, which has switch-assisted load balancing enabled.

I see a nice even spread across all four NICs on the SAN, and the SAN can max out the 2Gb/s dedicated iSCSI connection on each ESXi box.

The only thing is... does anyone know some settings for IOMeter to get the best read figures, to really stress the network card?

I know that the disk array is capable of a sustained read of WELL over 4Gb/s, and I would like to see if I can max out the quad NIC with one or two virtual machines.

The array is set to a 128KB stripe, and in IOMeter on two virtual machines I set three worker threads, each with 100% read, no randomness, 128KB reads and 32 outstanding I/Os per worker.
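(That works out to 2 VMs x 3 workers x 32 outstanding I/Os x 128KB = 24MB in flight at any one time.)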

However, it's all pretty much guesswork at the moment, and I have seen 2.9Gb/s so far.

Anyone know some tips for generating huge read speeds with IOMeter, so I can try and max out the 4Gb/s team?

Cheers. Sorry for the long and boring post.
 
iperf will max out your network.
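Something like this, if memory serves (iperf2 syntax; the IP is a placeholder for your SAN box):

# On the storage server
iperf -s

# On a client VM: four parallel streams for 30 seconds
iperf -c 192.168.0.10 -P 4 -t 30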

Ah, had a quick look. I want to use the SAN to max out the network, hence using IOMeter to do it.

Trouble is... not many people would tailor the settings for the best-case read scenario, which is what I want.

I bet in the vast majority of cases people are tailoring IOMeter for a more real-life workload.

I just want to max out the read capability, be it burst, sustained or whatever!

4Gb/s is only 512MB/s, so the SAN can easily saturate the link; I just need advice on how to get IOMeter to do it!

I think I may also migrate the SAN stripe size from 128KB down to 64KB.

All the EqualLogics seem to be 64KB and it does them no harm.
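hpacucli should be able to do that stripe-size migration online; something like this, assuming the controller is in slot 0 and it's logical drive 1 (I'd check the numbering with the show command first):

hpacucli ctrl slot=0 ld 1 show
hpacucli ctrl slot=0 ld 1 modify ss=64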
 
Have you tried messing with the IOPS value? 1 is not necessarily the best value, due to path-switching overheads.

Good point, and I have seen a lot around the net on this.

LeftHand themselves recommend changing it to 1, and this is the guide I followed to enable MPIO via the CLI.

I'm definitely open to other suggestions, but going from 1000 to 1 really made a huge difference.
 