SATA against SCSI

How much faster, in terms of say seconds, would SCSI be at loading the OS and running programs compared with SATA and IDE at 7200rpm, with the SCSI at 10,000rpm? Any help appreciated, please and thank you.
 
It's not really possible to put a quantitative figure on that. Any 10K rpm drive will have a lower access time than a 7200rpm one, but on the other hand a modern 7200rpm SATA drive will have a much higher sustained transfer rate than a 10K rpm SCSI disk (10K is old hat as far as SCSI/SAS is concerned).
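
Rough back-of-the-envelope numbers on the access-time side of that, just for perspective - average rotational latency is half a revolution, and the seek figures below are typical published ballpark values rather than measurements from any particular drive:

```python
# Average rotational latency is half a revolution; seek times are ballpark
# figures for illustration only, not measured values for any specific drive.
def avg_rotational_latency_ms(rpm: float) -> float:
    return 0.5 * 60_000.0 / rpm   # half a revolution, in milliseconds

drives = {
    "7200rpm SATA": (7200, 8.5),    # (rpm, assumed typical average seek in ms)
    "10K SCSI":     (10000, 4.7),
    "15K SCSI/SAS": (15000, 3.5),
}

for name, (rpm, seek_ms) in drives.items():
    rot = avg_rotational_latency_ms(rpm)
    print(f"{name:13s} {rot:4.2f} ms rotational + {seek_ms:3.1f} ms seek "
          f"= ~{rot + seek_ms:4.1f} ms per random access")
```

So the faster spindles clearly win on random access; sustained transfer is a separate question that comes down to data density, which is where a newer 7200rpm drive claws it back.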

With SSDs at the stage they are, I would say that SCSI/SAS isn't justifiable in a desktop environment.
 

Thanks for your input. I may go for SATA again, but this time buy the newest, most up-to-date drive instead of second-hand lol...
 
You may want to check out this thread for a healthy discussion about this.

rpstewart, I don't really know where you got your figures from, but they're totally wrong - sorry to say.

A 10K SCSI drive will thrash any 7200rpm SATA drive hands down, and to put things in perspective, a 15K SCSI or SAS drive will pretty much leave everything standing.

You forget that SCSI is designed to handle higher sustained I/O throughput than SATA; although SATA's theoretical maximum is 3Gb/s, the true speeds are much lower, in the hundreds of MB/s.
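
To unpack that 3Gb/s figure: SATA II's line rate is 3Gbit/s with 8b/10b encoding, so only eight of every ten bits on the wire are data; the sustained platter figure below is an assumed ballpark for a 7200rpm drive of the era, not a measurement:

```python
# SATA II link: 3 Gbit/s line rate, 8b/10b encoding (10 bits on the wire per data byte).
line_rate_bits = 3.0e9
interface_mb_s = line_rate_bits / 10 / 1e6    # ~300 MB/s usable interface ceiling

platter_mb_s = 100.0   # assumed sustained rate off the platters (ballpark, not measured)

print(f"Interface ceiling : {interface_mb_s:.0f} MB/s")
print(f"Sustained platter : ~{platter_mb_s:.0f} MB/s")
print(f"Link headroom left unused: ~{100 * (1 - platter_mb_s / interface_mb_s):.0f}%")
```

In other words, the interface is rarely the bottleneck; the platters are.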

Wnm0, it really depends on what you're looking to achieve with the drive, not forgetting that SCSI requires dedicated controllers, mostly all running on a 64-bit PCI-X 133MHz bus.

SAS is the serial evolution of SCSI, but as yet the PCIe bandwidth for the controllers is still maxed out at x8.

Whatever you do, don't rush out and buy a 7200rpm SATA if you want speed. The high-capacity 7200s get round the speed issue by using large caches and NCQ to smooth the data interaction with the controller.
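
For a feel of what NCQ-style reordering actually buys, here's a toy sketch - pure illustration, using distance travelled across LBAs as a stand-in for seek time, with a made-up request queue; real firmware also accounts for rotational position:

```python
import random

# Toy illustration of NCQ-style reordering: service queued requests nearest-first
# instead of in arrival order, to cut down total head travel.
random.seed(1)
queue = [random.randint(0, 1_000_000) for _ in range(32)]   # pending LBAs (made up)

def head_travel(requests, start=0):
    pos, travel = start, 0
    for lba in requests:
        travel += abs(lba - pos)
        pos = lba
    return travel

def reorder_nearest_first(requests, start=0):
    pending, pos, ordered = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nxt)
        ordered.append(nxt)
        pos = nxt
    return ordered

fifo = head_travel(queue)
ncq = head_travel(reorder_nearest_first(queue))
print(f"Arrival order travel : {fifo}")
print(f"Reordered travel     : {ncq}  ({fifo / ncq:.1f}x less head movement)")
```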

The best current compromise between speed and cost at the moment is the Velociraptor, whose 10K speed is very nearly on par with 15K SCSI/SAS devices in sustained I/O.

Also, yes, a lot of fuss has been made about SSDs, but unless you're looking to buy MTRON MSD-SATA3035-64s you'll be disappointed, as most SSDs, say from Corsair, Intel and OCZ, have a high read speed but slower write speeds, which is why they boot quickly but ultimately suffer on intensive tasks.

The MTRON units have brutal read and write speeds, but they come at a price you don't even want to think about at the moment.

IMO, SSDs still have a couple of years to go before reaching maturity.

What I want to know is why you think that something spinning at 10 or 15K can be slower than something that spins at 7200rpm? Also, the Seagate Cheetah 15K.6 SAS units and drives like the Velociraptors are built on 2.5" platters rather than the more usual 3.5" platters of large-capacity 7200s, which in itself means the r/w head has less track area to cover, which in turn means quicker data transfer.
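
As a quick sanity check on the platter-size point - linear velocity under the head scales with diameter times spindle speed, and seek distance scales with platter radius; areal density is held equal here purely for illustration:

```python
import math

# Linear velocity at the outer edge of the platter: v = pi * diameter * rpm / 60.
# More metres of track passing under the head per second means more bits per
# second at the same areal density; a smaller platter also means shorter seeks.
def edge_speed_m_per_s(diameter_in: float, rpm: float) -> float:
    return math.pi * (diameter_in * 0.0254) * rpm / 60.0

for label, d, rpm in [("3.5in @ 7200rpm",  3.5, 7200),
                      ("2.5in @ 10000rpm", 2.5, 10000),
                      ("2.5in @ 15000rpm", 2.5, 15000)]:
    print(f"{label:17s} outer-edge speed ~ {edge_speed_m_per_s(d, rpm):4.1f} m/s")
```

The 15K small-platter drives come out ahead on both counts; at 10K the edge speed is roughly a wash with a 3.5in 7200, so the shorter seeks and higher spindle speed are doing most of the work there.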

Like I say, it very much depends on what you're trying to achieve; once we know that, we can look at various storage options for you based around a budget range, say low / med / high.

I can tell you straight away that your mobo won't have the capacity to run an Ultra320 SCSI controller card, and there's not much point settling for an Ultra2 card as there's an associated performance hit.
 
rpstewart, I don't really know where you got your figures from, but they're totally wrong - sorry to say.

A 10K SCSI drive will thrash any 7200rpm SATA drive hands down
In access-time terms, yes, the SCSI will be lower, but a three or four year old 10Krpm SCSI disk will have a pretty poor data density, which leaves it trailing behind a modern 7200rpm SATA drive in the same way as the original Raptors were outpaced by the first of the WD AAKS drives - check the benchmark thread for details.
 
But that's down to the controller.

Three or four years ago you'd be looking at an Ultra2 unit, which was my point about the controller standards.

A 10K Ultra320 unit is still quicker than a 7200rpm SATA.

I've had a look through the whole hard drives forum, but can't find the benchmarking thread.

I agree about the age of the equipment, and the fact that it'll be a consumer motherboard will always restrict choices in that respect.

We had one of our guys talked into buying a cluster node of 50 7200rpm SATA drives for our servers. They were, to put it mildly, s**te. The access times went through the roof, so we sent the whole lot back and got 15K SAS units.

Like I said, it all depends on what he actually wants to use them for. If the initial question is the one to be answered, then I think his best compromise is a Velociraptor, or a 7200 with 32MB+ cache and NCQ capability, unless of course he wants to start on controller cards with big memory on them, but then the cost gets silly in relation to what the original intent was.

My thoughts are that unless your workload is something that swaps data at a very aggressive rate, the differences will be negligible, over and above good system administration - keeping the system and pagefile on separate volumes, monitoring startup items, keeping AVs in check, not letting caches build up, etc.

For what it's worth, I have a whole load of SCSI controllers and drives, and a board that has a PCI-X slot. But for my home system I went for a pair of Velociraptor 300GB drives which I got at a heavily discounted price. They came in just as fast as a 15K Cheetah in all but the most demanding server tests, and ultimately they don't generate the heat or the racket that a SCSI drive will. The Velociraptors were aimed as an enterprise-class product, engineered for server environments that work flat out 24/7 - which is the reason for the price.

As for SSDs, I really do stand by what I say regarding their infancy - the MTRON units are the most advanced, but the price is just stupid.
Give it a year and the technology and the prices will have matured, but now isn't the time to buy.

I've got a Caviar Green for all my storage needs, keeping the other two free of day-to-day clutter.
I read so much on these forums, like this, and the reality is that people should stand back and take stock of what they actually want from it.
Like you said above, SCSI on a desktop home PC is very much overkill for what it's going to give the home user.

The same goes for SAS, which I mentioned in the other thread I wrote in. The technology is there, yes, but it's all too easy to confuse enterprise-class quality with enterprise-orientated solutions, and what we have with SCSI and SAS is an enterprise solution of which 99.9% of home users will never see the true benefit.

Same goes for RAID arrays - how many times have you read in these pages of a RAID set going down? And more often than not it's when someone adds or removes a storage or optical device. Again, an enterprise solution designed to be installed and then not tampered with.
We (you) as enthusiasts look to constantly update and change - not the environment for enterprise solutions.

If I was going to advise, I, like you, would dissuade from SCSI for the reasons mentioned.
If you could afford it, I'd use a Velociraptor for the system, and keep everything else on good quality, well-specced 7.2K SATAs.
The effort involved with everything else is just not worth it for what you'll get out of it, and onboard controllers are more than most people will ever need.

(Oh, and I didn't read your first response correctly, about the 10K being old hat - apologies.)

If you can link to the benchmarking thread, I'd be interested to read it though.
 
It goes back to the whole cache functionality - remember that caches became common on U320 15K drives; there was none of that on 10K drives, where you relied on the NCQ functionality of the controller card. It's what I've been saying: if you look at the single-access benchmarks and the real-world benchmarks, they show that the consumer units are optimised for those uses; then look at the results for the server benchmarks like the IOMeter results.

You start moving into the high-I/O IOMeter runs and it's a different story, with even 10K SCSI roaring ahead - but that said, they were getting the equivalent of 128 simultaneous requests, which is where SCSI still rules.

In a desktop environment you never really harness the true potential of these server-class products, and yet they still come out on top on average random access times - which is where the rotational speed comes in.

Again, the advantages don't show in all the 'real-world scenario' tests like Far Cry and World of Warcraft, et al.

SCSI/SAS's strength comes from letting multiple access streams work concurrently. That's why they're still the server choice.
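
If you want to see that concurrency effect for yourself, here's a minimal sketch - Unix-only (it uses os.pread), the file path is a placeholder you'd need to swap for a real multi-gigabyte file, and the OS page cache will flatter the results unless the file is much bigger than RAM:

```python
import os, random, threading, time

# Issue the same total number of 4KB random reads, first one at a time, then from
# 128 worker threads - a crude stand-in for the "128 simultaneous requests" case.
PATH = "/path/to/large_test_file.bin"     # placeholder: point at a big, uncached file
READ_SIZE, TOTAL_READS = 4096, 4096

def worker(fd, file_size, n_reads):
    for _ in range(n_reads):
        offset = random.randrange(0, file_size - READ_SIZE)
        os.pread(fd, READ_SIZE, offset)   # positional read, safe to share the fd across threads

def run(n_threads):
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    threads = [threading.Thread(target=worker, args=(fd, size, TOTAL_READS // n_threads))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads: t.start()
    for t in threads: t.join()
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{n_threads:3d} concurrent streams: {TOTAL_READS / elapsed:7.0f} reads/s")

run(1)     # one user hammering the disk
run(128)   # the server scenario
```

How much the second run improves over the first is exactly the queueing behaviour this part of the thread is arguing about.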

Let's take your photos: what if you hosted your website on your PC with your 7.2K SATA unit, and suddenly 128+ people every second were trying to access images of various sizes and quality? The SATA would cave at the knees. The SCSI would cope, because that is the strength of the interface.

That's why high-end video editing suites still use SCSI, and that's why servers and hosts still use it. It's not about theoretical, mathematically modelled benchmarking, it's about chucking hundreds of huge chunks of information, read and write, simultaneously over a constant period of time.

In this respect, your SATA has no place in this environment.

When you're just one user nailing a game for all it's worth, loading a level or editing an image, then a simple cache is enough; when you're looking to send a single or small number of streams in one direction, or mix them sporadically, then the SCSI's capabilities have no place.

In this respect, the SCSI has no place in the environment.

Like I said, this stuff was designed for a purpose. You'll never harness its true potential as an ordinary user.

It's like buying Thrust 2 and wondering if it's capable of taking the family to Wales and towing a caravan OK.
 
Thanks for your input. I may go for SATA again, but this time buy the newest, most up-to-date drive instead of second-hand lol...

If you still want to experiment, you can usually pick up a used Dell Perc 5/i SAS/SATA PCI Express RAID controller for around £70. 15K SAS drives also go cheap from time to time.

I've recently added a Perc 5/i along with 2 x Fujitsu 3.5" 15K SAS drives to my main PC; the difference is noticeable and I'm very happy with it. Yes, it probably is overkill, but it was fun at the time :D

WhiteKnight
 
You know DLKnight, it's funny - nearly all my mates are programmers and business systems specialists, and for all the benchmarks and data, they all say the same about SCSI/SAS 15K drives: they all use them in their home rigs.

Also, the Dell units are all made by Adaptec so the support is great on them. I don't think it's overkill if you have an objective in mind; it only becomes overkill when you want to integrate the technology for the sake of it.
 
Also, the Dell units are all made by Adaptec so the support is great on them. I don't think it's overkill if you have an objective in mind; it only becomes overkill when you want to integrate the technology for the sake of it.

The Perc 5/i is a rebadged LSI MegaRAID SAS 8480E - look here.
 
The Perc 5/i is a rebadged LSI MegaRAID SAS 8480E - look here.

Superb, I'll check out the auction sites for one. I had some U320 PCI-X 133 SCSI controllers, and they were Dell-branded Adaptec units, so I thought they were their supplier of choice.

Have you had any problems with it so far, and how have you found driver availability?
 
No idea on driver availability, but the cards are known to be picky about the board they're used in. They really don't like PCIe slots which have a variable lane width based on what is in the other slots - for example Intel P35 & P45 boards.
 
Have you had any problems with it so far, and how have you found driver availability?

No problems at all; the drivers and LSI firmware/BIOS are contained within the link above, or here. Plus the management program is awesome.

No idea on driver availability, but the cards are known to be picky about the board they're used in. They really don't like PCIe slots which have a variable lane width based on what is in the other slots - for example Intel P35 & P45 boards.

There is a mainboard compatibility thread here - http://forums.2cpu.com/showthread.php?t=84882. I had read reports that it didn't work with my motherboard (P5N-E SLI), but after blanking pins 5 & 6 on the PCI-E connector the card works fine (see http://www.overclock.net/hard-drives-storage/359025-perc-5-i-raid-card-tips.html).
 