IBM Seagate Savvio 15K.2 146GB

Hi Folks, hope someone can help.

I picked up some Seagate Savvio 15k.2 drives for a total bargain price.

They are all brand new, in OEM packaging... the only problem being they have an IBM label on instead of a Seagate one.

Is this going to cause an issue?

The IBM part number is 9FU066-039, but I can't find anything on the internet about it.

I want to use these drives with an HP P410 Smart Array. Do you guys think IBM have just slapped a sticker on, or do you think they have customised the firmware?

Nothing in the product description suggested it was an IBM-stickered drive, but for the price I can't complain, cos they were so cheap.

Any advice is more than welcome!

Cheers fellas.
 
It shouldn't matter at all; it's just been bought for an IBM server. As far as the controller is concerned it's just a SAS/SCSI-compliant drive, and it should work fine.
You'll find the same drives with HP stickers on and a different part number; it's just rebranded.
 
That's fantastic news, thank you very much.

The only issue I have is with warranty, as on the Seagate website I get the classic:

The product you identified was sold as a system component. Please contact your place of purchase for service. Seagate sells many drives to direct OEM (Original Equipment Manufacturer) customers. These products are usually configured for the OEMs only, as components for their systems. You must contact your place of purchase for any warranty support on these drives.

Which leaves me a bit stuck, as IBM don't seem to recognise the part number either, which is dumb, since it's got their damn sticker on it!
 
It would probably have been earmarked to go in a hot-swap caddy and then have a different part number altogether.
 
IBM won't warranty it because it's probably a warranty replacement itself.
I have a lot of HP ones lying about because when a drive fails I hold generic Seagate spares to swap in immediately, then when the replacement arrives it replaces the spare I used. These warranty spares don't hold their own warranty and are only covered by the warranty on the server they're used in. Hence it won't come up.
 
Well, I tried them, and they work fine with HP Smart Array cards.

They're identified as IBM, irritatingly, when using the ACU offline CD; however, my God, they are fast drives.

Just did a little test with a three-drive RAID 5 array, and they are rapid.

Also, you can't even hear them or feel any vibration. Makes me wonder why the Velociraptor is so damn noisy when these are so much better and faster.
 

Because the Velociraptor is cheaper and less well built - it's not an enterprise-grade drive designed for 24/7 use over ~5 years.
 
Funny you should say that, but it's in the Enterprise section on the WD website if I remember correctly!

But I was expecting to at least hear the drives seek, or spin up with a slight whine... nada, nothing.

Whereas the Raptors exhibit lots of high-volume clicking and vibrate a hell of a lot more.

I'm so impressed!

Can't wait to get them all in an array and start having some fun.

I just wish I had money for the new ML110 G7s, as their PCIe buses are all PCIe Gen 2, rather than the G6's mixture of PCIe Gen 1 and 2.

That gets important when dealing with quad NICs and direct-attached storage that's fast!
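Quick back-of-envelope on the PCIe side (assuming the usual ~250MB/s per lane for PCIe 1.x and ~500MB/s for 2.0, and a made-up ~1GB/s figure for a fast DAS array):

Code:
# Back-of-envelope PCIe bandwidth check (rough figures, not measured).
# Assumes ~250 MB/s per lane for PCIe 1.x and ~500 MB/s per lane for PCIe 2.0
# (usable rate after 8b/10b encoding, ignoring protocol overhead).

PER_LANE_MB_S = {"gen1": 250, "gen2": 500}

def slot_bandwidth(gen, lanes):
    """Approximate one-way slot bandwidth in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

quad_gige_mb_s = 4 * 1000 / 8   # 4x 1Gb/s NIC ports ~= 500 MB/s
fast_das_mb_s = 1000            # a fast multi-spindle SAS array, say ~1 GB/s (guess)

for gen in ("gen1", "gen2"):
    for lanes in (4, 8):
        bw = slot_bandwidth(gen, lanes)
        print(f"PCIe {gen} x{lanes}: ~{bw} MB/s "
              f"(quad GigE needs ~{quad_gige_mb_s:.0f}, fast DAS ~{fast_das_mb_s})")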
 
It's in the business section because of its 10k spindle speed, but it's still SATA, not SAS, and it's not nearly as well built as the proper job.

DAS is a bit tame. I've just done my first install of DL360 G7s; the drives in there are pretty nippy but pale into insignificance compared to the SAN they're connected to :)
An Equallogic PS4000VX, 14x 450GB 15k SAS in RAID 10. Once you start playing with SAN storage your definition of "fast" gets turned on its head :)
 
Well, suffice to say I'm impressed again.

Did an install of Windows Storage Server 2008 R2, and even with old firmware on the P410 RAID array, HD Tune showed a MINIMUM sustained transfer rate of 1069MB/s.

Woohoo!

This is with 16 drives in a standard RAID 5 array. Perhaps more performance can be gained with different RAID configurations, but hell, I'm happy with that!

Blows the Equallogic SANs we have at work into the dust!
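Rough sanity check on that number (assuming roughly 70-80MB/s sustained per 2.5" 15k spindle, which is a guess rather than a datasheet figure):

Code:
# Quick sanity check on the HD Tune number (very rough, assuming a
# per-spindle sustained read of ~70-80 MB/s for a 2.5" 15k SAS drive --
# that figure is a guess, not from a datasheet).

def raid5_seq_read_estimate(drives, per_disk_mb_s):
    # Sequential reads in RAID 5 roughly scale with the data spindles,
    # i.e. (n - 1) drives' worth once parity blocks are skipped.
    return (drives - 1) * per_disk_mb_s

for per_disk in (70, 80):
    est = raid5_seq_read_estimate(16, per_disk)
    print(f"16-drive RAID 5 @ {per_disk} MB/s per disk -> ~{est} MB/s")
# ~1050-1200 MB/s, so a 1069 MB/s minimum is at least plausible for reads.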
 
That sounds like your benchmarking tool isn't doing direct I/O. Those speeds sound like you're writing mostly to memory on the controller.

If it beats your SANs on the same test criteria... something is set up wrong. The PS4000VX I mentioned earlier comes in at over 3000MB/s with direct I/O turned off.
I can guarantee you the controller in them is better than the P410i :)
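To put some toy numbers on that (everything here is made up, purely to show the shape of it):

Code:
# Toy model of why a short benchmark over a write-back cache looks so fast.
# All numbers here are made up for illustration.

def apparent_throughput(total_mb, cache_mb, cache_mb_s, disk_mb_s):
    """Average MB/s when the first cache_mb goes to cache and the rest to disk."""
    cached = min(total_mb, cache_mb)
    spilled = max(0, total_mb - cache_mb)
    seconds = cached / cache_mb_s + spilled / disk_mb_s
    return total_mb / seconds

for total in (256, 1024, 16384):  # size of the test run in MB
    mb_s = apparent_throughput(total, cache_mb=512, cache_mb_s=3000, disk_mb_s=1000)
    print(f"{total:>6} MB run -> apparent {mb_s:,.0f} MB/s")
# Small runs report close to cache speed; only long runs converge on the disks.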
 

I thought that was the point of HD Tune / HD Tach?

They do GB after GB of transfer to saturate the cache and reveal the true I/O of the drives.

I don't know what a PS4000VX is, but I'm referring to our iSCSI SANs, which would never ever see 3000MB/s, as a) they are iSCSI (4 ports, I think) and b) they only have 16 disks in each of them.

I think our SANs are PS4000E.

Anyway... the performance I get correlates almost exactly with the performance listed in an LSI 620J enclosure document, which shows a 2426MB/s transfer rate with 24 spindles at a 256KB request size.

Anyway... point is, I'm happy and it's rapid.

Now I've got an NC365T quad NIC that I will team to give me a 4Gb/s connection to the outside world to present iSCSI LUNs on.

Unless you guys can think of a better way of doing it?!

It's all experimental anyway; this is me building a mega home test lab!

Sounds like you know your SANs though, Skidilliplop.
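Going back to that LSI figure, the per-spindle scaling looks like this (simple arithmetic, nothing more):

Code:
# Rough per-spindle scaling from the quoted LSI figure (2426 MB/s from
# 24 spindles at a 256KB request size) down to a 16-drive array.
lsi_mb_s, lsi_spindles = 2426, 24
per_spindle = lsi_mb_s / lsi_spindles                 # ~101 MB/s each
print(f"~{per_spindle:.0f} MB/s per spindle")
print(f"16 spindles -> ~{16 * per_spindle:.0f} MB/s")  # ~1617 MB/s
# A 1069 MB/s *minimum* from 16 drives is in the same ballpark, a bit lower,
# which seems reasonable given RAID 5 parity and old controller firmware.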
 
HD Tach etc. don't bypass the cache, especially as the disks are fast and can empty it quite quickly anyway. But you are correct in that they will emulate what applications will do. ATTO lets you force reads/writes direct to disk, which is handy to know, because if a cache battery dies in Equallogic arrays they fall back into write-through mode until the charge is back within tolerance. They also do this if one of the controllers fails (by default; it can be set not to do that).

With SANs it's always hard to properly bench them, because the volumes don't physically exist in any set place. The PS4000VX is the 15k SAS version of the ones you have, which are 7.2k SATA. Same 2x Gigabit setup though. The reason mine shows up as 3000+ when the NICs could never achieve that is caching. Software initiators use system RAM to cache outgoing writes (or at least the M$ initiator does), so most of the readings you get are just writes to RAM of some sort, be it on the SAN/controller or in system memory.
Also, one disk tray strictly speaking is not a SAN. A SAN implies the use of multiple units; in that scenario you can have volumes spread across hundreds of disks and hundreds of controller caches, which is where it stops being essentially a network-attached DAS enclosure and becomes a proper storage network.

Also, a tip with the NICs. If you're presenting iSCSI to other devices (or indeed connecting to an iSCSI target too), you shouldn't team NICs. They should each have their own IP in their own right and use MPIO to load balance across them. Rather than one fat pipe, this gives you several parallel pipes, which most storage workloads prefer and which will give better performance, especially the likes of SQL/Exchange, which need lots of concurrent I/O. It then becomes less about throughput and more about IOPS.
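A tiny sketch of the least-queue-depth idea versus plain round robin, with hypothetical paths and I/O sizes, just to show why it balances better:

Code:
# Sketch of "least queue depth" path selection versus plain round robin.
# Paths, I/O sizes and the lack of completions are all simplifications --
# this just shows why picking the least-loaded path evens things out.
import itertools
import random

random.seed(1)
PATHS = ["nic0", "nic1", "nic2", "nic3"]

def pick_least_loaded(load):
    """Pick the path with the least queued work (the 'least queue depth' idea)."""
    return min(PATHS, key=lambda p: load[p])

rr = itertools.cycle(PATHS)
load_lqd = {p: 0 for p in PATHS}
load_rr = {p: 0 for p in PATHS}

for _ in range(1000):
    size_kb = random.choice([4, 4, 4, 256])   # mostly small I/O, occasional big one
    load_lqd[pick_least_loaded(load_lqd)] += size_kb
    load_rr[next(rr)] += size_kb

print("least-loaded path selection:", load_lqd)  # near-identical load per path
print("plain round robin:          ", load_rr)   # big I/Os skew some paths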
 
I'm connecting ESX 4.1 via iSCSI to LUNs hosted on Storage Server 2008 R2.

I didn't think that ESX supported true MPIO?

Isn't it just load balancing using hashing?
 

Nope, that's pretty much what NIC teaming does. MPIO is more clever than that: you can set it to use different criteria, some of which use hashing to load balance (weighted paths, I think, does this), but most of the time you'd use "least queue depth", which puts the I/O through the NIC with the least outstanding I/O. That load balances things a lot more effectively and reduces your write latency, and low write latency is something the likes of SQL and Exchange really appreciate. I'm pretty sure ESXi 4.1 supports MPIO over iSCSI; IIRC 3.5 didn't, but I'm sure it does now.
 
Well, I would want a round-robin form of MPIO.

Can that be done with ESXi 4.1?

I want the full 4Gb/s offered by bonding my quad-port NIC.

Seems to work well with the HP teaming software in Windows, but I've no clue on the ESXi side of things... It's probably got to be configured with MPIO, as the ESXi teaming is a bit rubbish!

Cheers.
 
Depends what you're running on your VMs, but usually, if you'll have multiple machines accessing volumes through iSCSI, you're best off with least queue depth as your balancing method, as that works best for concurrent access. Round robin only tends to work better if you have a single LUN connected using multiple NICs and are lumping large files back and forth. Otherwise it's IOPS you want more than raw sustained throughput.
I'm sure ESXi will support round robin at least, and probably weighted paths as well, as these are the common ones. But it's worth looking up whether it supports least queue depth too.

Edit: looked it up in the ESXi tech guides. It does support Native Multipathing (NMP), but the path selection policies are round robin, most recently used or fixed. I guess if you want more clever path selection you'd need to do it within a hardware HBA's firmware or with some third-party VMware plugin.
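For round robin specifically, the I/O operation limit works roughly like this (a sketch of the idea, not real NMP code; 1000 is the default limit as far as I'm aware):

Code:
# Sketch of ESXi-style round robin with an "IOPS" operation limit:
# stay on one path for N commands, then rotate. Not real NMP code, just the idea.

class RoundRobinPSP:
    def __init__(self, paths, iops_limit=1000):   # 1000 is the usual default limit
        self.paths = list(paths)
        self.iops_limit = iops_limit
        self.current = 0
        self.sent_on_current = 0

    def select_path(self):
        if self.sent_on_current >= self.iops_limit:
            self.current = (self.current + 1) % len(self.paths)
            self.sent_on_current = 0
        self.sent_on_current += 1
        return self.paths[self.current]

# With the default limit a single stream sits on one NIC for long bursts;
# dropping the limit to 1 interleaves commands across all paths.
for limit in (1000, 1):
    psp = RoundRobinPSP(["vmnic1", "vmnic2"], iops_limit=limit)
    used = [psp.select_path() for _ in range(10)]
    print(f"limit={limit}: {used}")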
 

Yes, it does round robin and can fall back to redundant paths etc.

Nice article to get you going here:
http://virtualgeek.typepad.com/virt...-post-on-using-iscsi-with-vmware-vsphere.html

You will probably find you have to tweak the IO operation limit for your environment to get the best performance.

You may find that if you bond all the NICs and then run iSCSI down the link, you only get one NIC's worth of bandwidth, due to not being able to split the session over more than one link.
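Rough illustration of why that happens (toy hash function and made-up addresses, just to show that one session always lands on the same port):

Code:
# Why a bonded/teamed link often gives one iSCSI session only one NIC's worth
# of bandwidth: typical teaming hashes the connection tuple to pick a port,
# and a single session always hashes to the same port. Hypothetical addresses.

def team_port(src_ip, dst_ip, dst_port, num_ports=4):
    """Toy tuple hash, standing in for switch/teaming load-balance hashing."""
    return hash((src_ip, dst_ip, dst_port)) % num_ports

one_session = [("10.0.0.10", "10.0.0.50", 3260)] * 8                    # one iSCSI session
many_sessions = [("10.0.0.10", f"10.0.0.{50 + i}", 3260) for i in range(8)]

print("single session ->", {team_port(*t) for t in one_session})    # always the same port
print("many sessions  ->", {team_port(*t) for t in many_sessions})  # usually spread over several ports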
 
I've sorted it.

Got MPIO enabled with the IOPS limit set to 1 on the ESXi boxes.

Windows Storage Server 2008 R2 is presenting the disk array as four LUNs through an NC365T quad-port Intel NIC, which has switch-assisted load balancing enabled.

I see a nice even spread across all four NICs on the SAN, and the SAN can max out the 2Gb/s dedicated iSCSI connection on each ESXi box.

The only thing is... does anyone know some settings for IOmeter to get the best read figures and really stress the network card?

I know that the disk array is capable of a sustained read of WELL over 4Gb/s, and I would like to see if I can max out the quad NIC with one or two virtual machines.

The array is set to a 128KB stripe, and in IOmeter on two virtual machines I set three worker threads, each with 100% read, no randomness, 128KB reads and 32 outstanding I/Os per worker.

However, it's all pretty much guesswork at the moment, and I have seen 2.9Gb/s so far.

Anyone know some tips to generate huge read speeds with IOmeter so I can try and max out the 4Gb/s team?

Cheers. Sorry for the long and boring post.
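For reference, here's the back-of-envelope I'm working from (Little's-law style; the latency figures are pure assumptions, not measurements):

Code:
# Little's-law style guess at how many outstanding 128KB reads it takes to
# fill a 4Gb/s team. The latency figures are assumptions, not measurements.

def outstanding_needed(target_gbit_s, io_kb, latency_ms):
    target_mb_s = target_gbit_s * 1000 / 8          # 4 Gb/s ~= 500 MB/s
    ios_per_s = target_mb_s * 1000 / io_kb          # I/Os per second required
    return ios_per_s * (latency_ms / 1000)          # concurrency = rate x latency

for latency in (1, 2, 5):                           # ms per 128KB read over iSCSI
    need = outstanding_needed(4, 128, latency)
    print(f"{latency} ms latency -> ~{need:.0f} outstanding I/Os in total")
# Split that total across the workers/VMs; if a per-VM queue or single NIC path
# tops out first, add more workers or VMs rather than just raising outstanding I/Os.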
 