ecommerce

What is better for a server: SCSI or SATA?


WD offers SATA drives rated to last 1,200,000 hours for servers, and for that reason we were using them.

We have many RAID web servers under a heavy working load. Our experience is that the SATA drives always fail within 6 months under that load.

Now we have decided to switch back to SCSI.

My question is: why do manufacturers like WD continue lying to people by saying that their SATA drives are built for servers?


I'm not going to argue that SATA/EIDE drives are of just as high quality as SCSI drives... because that's simply not true.

However... do you have adequate cooling for your SATA drives too?

These drives run cool... but little or no airflow will make even a cool-running drive get really hot when it is working under heavy load. I tried this at home... and my WD 160GB got so hot I couldn't hold it in my bare hands after 3-4 hours of work.

Just a thought.

Why WD is lying... I don't know... maybe they're not... maybe they are... or maybe they just have high expectations for their enterprise SATA drives :)


Since all of them (over 20 different servers) have experienced the same failure, I do not believe that mishandling or abuse during shipping or storage is the reason for this constant failure - all of them failed within 6 months. We have bought different models of WD SATA drives at different times and from different suppliers - even directly from the manufacturer.

I think that the manufacturer should be more responsible when disclosing product specifications. Sorry, but in our experience WD SATA technology is not suitable for servers.


I've had better luck with SCSI drives, but maybe it is because of the way I treat them? ;)

From a review of a 4-bay SATA controller and case:

"At the end of 90 minutes the Maxtor installed in the bottom slot reported 127.4 degrees Fahrenheit, the second Maxtor one slot higher was 127.4 degrees, and the next higher Seagate 300 was 122 degrees.

The next process of the cooling test was to leave the enclosure turned on with the hard drives mounted for an hour, but with no usage other than temperature monitoring. I wanted to see how well the hard drives might cool down inside the SeriTek/2eEN4 enclosure. After resting for an hour, the bottom Maxtor temperature was 118.4 degrees, the Maxtor above it was 120.2 degrees and the Seagate 300 was 118.4 degrees Fahrenheit."

127.4 °F = 53 °C

120.2 °F = 49 °C

... running the SeriTek/2eEN4 enclosure with only two or three hard drives installed allows the drives to operate between 5 and 13 degrees cooler. The removal of a drive tray or two reduces heat and allows the air from the fans to circulate much better
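
As a quick sanity check on those Fahrenheit-to-Celsius conversions (a minimal Python sketch, nothing more):

# Convert the review's Fahrenheit readings to Celsius.
def f_to_c(f):
    return (f - 32) * 5 / 9

for f in (127.4, 122.0, 120.2, 118.4):
    print(f"{f} F = {f_to_c(f):.1f} C")
# 127.4 F = 53.0 C, 122.0 F = 50.0 C, 120.2 F = 49.0 C, 118.4 F = 48.0 C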

A long-term environment of high temperatures makes me wonder about long-term reliability.

(Source: AMUG review of the SeriTek/2eEN4.)

Which is one reason I would recommend SCSI-quality enclosures. A larger tower with more airflow would help - one designed for 10K SCA SCSI drives. And possibly using drive coolers.

I know a lot of people using Raptors have found them to be very reliable, but they also treat them well. Some enclosures specifically state that they are not designed for 10K drives.

The WD SATA drives collapsed in a very good cooling environment: a full-size rack-mounted case with two fans at the front and two more at the rear. On the other hand, SCSI drives (Cheetahs) in the same type of case are still working after 2 years without failing. So I do not believe that cooling was the problem.


WD offers SATA drives rated to last 1,200,000 hours for servers, and for that reason we were using them.

Which drive were you using? And did WD claim it supports a 100% duty/seek cycle?

We have used several enterprise drives with RAID support:

WD Caviar RE Model WD2500SD, WD Caviar SE Model WD2500JS, and the latest WD Caviar RE2 400 GB Hard Drives


Which drive were you using? And did WD claim it supports a 100% duty/seek cycle?

Looks like they spec 1.2M hours MTBF at 100% duty cycle, according to this spec sheet:

http://www.westerndigital.com/en/library/s...2879-001136.pdf

The reason I've created this thread is that I feel WD's specs are not true. A 100% duty cycle for a server drive is very demanding - and that is what WD claims to support.

My company has bought many of their drives based on those specs. Now, why do their drives always fail so quickly? The investment my company has made is a complete loss. But what is even worse is that every time a server fails we have to spend a lot of time and money rebuilding it.

It is true that WD will honor the 5-year warranty. They'll give you a replacement disk (a recertified drive) through an RMA, but that doesn't make up for the nightmare you go through.

NOTE: My experience is with WD; that doesn't mean every other SATA drive is OK.

We have decided to drop SATA and use SCSI only.


This smells a lot like you've built those servers yourself. If you did, I'm not in the least bit sorry for you. There are far, far better choices than building your own server. I actually can't find one good reason to do it except in the rare case where you need a very specific hardware configuration. Buy IBM. Buy HP. Even buy Dell. That way you'll get a system that has been tested, is supported, and is guaranteed to work with supported parts from the vendor. (I know IBM even supports configurations with certain non-IBM hardware, although they won't replace defective third-party hardware, of course.) If you buy a system like that and it doesn't work, you know who to be angry at. And if you do have such a brand-name server, I don't know why you would get your hard drives from somewhere else.

So, last post of HisMajestyTheKing.

We just had an IBM server in... xSeries... el cheapo... which originally had a Seagate SATA drive in it. The drive was dead... but that's not the point. Even known brands use SATA drives in their servers.

Not a good thing though... and I guess you get what you pay for... just pointing it out :)

The King is Dead. All hail the King!

Hey hey, my my; the King is dead, but he's not forgotten.

This is the story of Johnny Rotten, Rock N Roll will never die!

He's temporarily on a dialup connection (which is what I'm almost always on... friggin' girlie mon he is, spoiled children of the modern age), so he'll only be reading... just like ddrueding, maybe a post or two now and then... got to put up something 'interesting' to entertain him more in the B&G. Hmm, I think I can do that ;-)


What is commonly referred to as MTBF is nothing but a statistical projection based largely on early-life (infant mortality) testing of a product, not an endurance test.
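
To put that 1.2M-hour number in perspective, here is a rough back-of-the-envelope check (a minimal Python sketch; the fleet size of 20 servers with 4 drives each is only an assumption for illustration):

# Rough sanity check: what would a 1,200,000-hour MTBF imply for a fleet of drives?
# MTBF is a population statistic, not a promise about any single drive's lifetime.
MTBF_HOURS = 1_200_000        # WD's claimed figure
HOURS_PER_YEAR = 24 * 365     # 8760 hours of 24/7 operation

afr = HOURS_PER_YEAR / MTBF_HOURS                 # implied annualized failure rate
print(f"Implied annual failure rate: {afr:.2%}")  # ~0.73% per year

drives = 20 * 4               # assumed fleet: 20 servers x 4 drives each (illustration only)
expected = drives * afr * 0.5 # expected failures over 6 months of 24/7 use
print(f"Expected failures in 6 months for {drives} drives: {expected:.1f}")  # ~0.3

If the spec really applied to this workload, losing every drive within six months would be astronomically unlikely, so either the spec doesn't cover this duty cycle or something else is killing the drives.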

If you design the server yourself, you probably should make the software flexible enough that when a drive fails it can be recovered easily. All drives can fail and will eventually fail, so your focus should be on how to make the "hot spare" kick in and rebuild quickly.
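
For example, if these are Linux software RAID (md) arrays, even a small script run from cron can flag a degraded array immediately so the rebuild gets watched (a minimal sketch under that assumption; a hardware RAID controller would need its vendor's management tools instead):

# Minimal degraded-array check for Linux software RAID (md).
# Assumes /proc/mdstat exists; hardware RAID controllers need vendor CLIs instead.
import re
import sys

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return md device names whose member-status field (e.g. [UU_]) shows a missing disk."""
    bad, current = [], None
    with open(mdstat_path) as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
                continue
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                bad.append(current)
    return bad

if __name__ == "__main__":
    failing = degraded_arrays()
    if failing:
        print("DEGRADED:", ", ".join(failing))
        sys.exit(1)  # non-zero exit so cron or a monitoring system can raise an alert
    print("all arrays healthy")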

So you have to figure out whether to write them off and move to another brand, or keep using WD until the warranty runs out. I guess if you have enough of them and have RMA'd a few, you can ask your vendor to take back your purchases or replace them with another brand (or pay a bit extra for SCSI).

Good luck.

In fact, we always use RAID 5, and when a drive dies the hot spare takes over. The most important issue is that none of the drives has lasted more than 6 months - when they are supposed to last 5 years. We have also had two drives fail one right after the other, leaving no time for the spare to rebuild. Maybe the kind of use we gave our SATA drives was very demanding and killed them early. That being the case, WD SATA drives are not suitable for servers because they don't last.


My short answer would be Seagate.

A suggestion for lowering the administration hassle would be diskless servers and Xen, assuming you're not running Windows.


Most of the servers I've worked with use Seagate SCSI drives. I know a few that were labeled as IBM eServer drives but in reality were Seagate Cheetahs (the small print and drive ID gave it away). Drives such as Maxtor/Quantum Atlas or Fujitsu ones are very rare indeed. Some of our recent servers from Dell use SATA though, again mostly Seagates, and they are doing fine running 24/7. Only one odd Dell server uses a Hitachi/IBM drive. AFAIK, most of the drives used in servers very rarely fail. :ph34r:


I'd just like to place this thread in context.

User claim: every WD drive the poster has used has failed within six months -- *every single one*.

User conclusion: WD drives are crap and WD lie about their MTBF number.

If every single one failed within six months, which is the more likely explanation:

(a) Something is wrong with WD drives and they are crap. Anyone using them for 24/7 operations will have nearly 100% of drives fail within six months.

(b) Something is wrong with the power, systems, handling, or environment of the drives at this location that is causing 100% of them to fail within 6 months.

I'll leave it to the rest of the readers to draw their conclusions as to the reliability of the user's conclusion about WD drives.

The right answer is (a) for the following models:

WD Caviar RE Model WD2500SD, WD Caviar SE Model WD2500JS, and the latest WD Caviar RE2 400 GB Hard Drives

I don't know about other models...

As you say, 'Most of the servers I've worked with use Seagate SCSI drives' - there must be a reason for that...


We just had an IBM server in... xSeries... el cheapo... which originally had a Seagate SATA drive in it. The drive was dead... but that's not the point. Even known brands use SATA drives in their servers.

Not a good thing though... and I guess you get what you pay for... just pointing it out :)

Depends on what you use that server for. SATA drives aren't necessarily a bad solution, and I've had SCSI drives arrive DOA just as well, regardless of brand. Over the past few years I've seen most hard disk brands in IBM servers and desktops - Seagate, Quantum, Maxtor, IBM/Hitachi and Fujitsu. It depends on what conditions they get rather than sheer performance or perceived reliability, I'd guess. Personally I wouldn't mind sticking a couple of SATA drives in a file server under light load, like so many of our clients have. I'd be surprised if a small office with, say, a single file/print server and half a dozen PCs would notice the difference between 7200 rpm SATA and 15K SCSI drives in the server. It's just that SCSI is still better supported and more flexible.

As you say, 'Most of the servers I've worked with use Seagate SCSI drives' - there must be a reason for that...

No more sinister a reason than buying your hardware at a good price where it's available. I can tell you what the most likely brand of hard drive is that you'll find in IBM Netfinity 3000/3500 and xSeries 200/205/206/220/225/226 servers from the past several years. It probably also varies with your location. Lexwalker said most SCSI drives in the IBM servers he encountered were relabelled Seagates, but I've seen at least as many old IBMs, a good number of Fujitsus, and Maxtors in the last few servers. I frankly don't care about the actual brand as long as you're not stuck with a bad series like the somewhat troublesome UltraStar 36LZX a few years back. A fair number of those needed a firmware upgrade for some reason I don't remember.


Either 1) you got a bad batch of HDs, 2) your HD enclosure sucks, or 3) something is wrong with your shipper. FYI: it doesn't matter how many fans you've got on them; if they're trying to blow air through a bunch of HDs stacked on top of each other 1 or 2 mm apart, the drives are going to get hot as hell. Post what temps they're at...
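
For anyone wanting to post those numbers: on a Linux box with smartmontools installed, something along these lines pulls the temperature attribute for each drive (a rough sketch; the device names are placeholders and the exact attribute label varies by drive):

# Rough sketch: read drive temperatures via smartmontools.
# Assumes smartctl is installed and this is run as root; adjust the device list to your system.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholders - list your actual drives here

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Most drives report attribute 194 Temperature_Celsius (some use 190 Airflow_Temperature_Cel).
        if "Temperature" in line:
            print(dev, line.strip())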

Frankly, I've seen horror stories like this one all over the place, and I hear them every time I go to a store and talk to a salesperson: "don't buy brand X, Y, or Z, they're all crap, trust me man...". Sometimes it's Seagate, sometimes it's Maxtor, and I guess this time it's WD.

