Reliability Above 2TB? Cloud storage co. and reviews raise questions



#1 jonnyz2

  • Member
  • 19 posts

Posted 27 November 2013 - 02:00 PM

I was just speaking with a major cloud storage company that offers a seed drive service (to get started with large amounts of data).  They mentioned that they don't use seed drives larger than 1TB, for reliability reasons.  In particular, their tech told me they find drives at 3TB or above to be significantly less reliable than smaller drives.

 

That anecdote fits with what I observe reading online reviews.  I've looked at a LOT of 3TB drive reviews (from every manufacturer Newegg sells) and see that most 3TB drives get around 50% five-star ratings (lower than I'd like to see), with many drives getting 18-25% one-star ratings that usually cite DOA units, short-term failures, and/or data corruption.  The failure rates being reported on these drives (in some cases across 700+ reviews) seem quite high.

 

So, since drive reliability is key for me, I'm wondering whether I should stay with 2TB drives.  That would be kind of a pain, since I'm already running out of space on a 2x2TB mirror setup (lots of large photography files).

 

So, while I'm looking for new drives for photo storage, I would rather buy 4x 2TB than 2x 4TB if that would be more reliable (just as an example).

 

I'd appreciate your thoughts!


Edited by jonnyz2, 27 November 2013 - 06:27 PM.

#2 Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,355 posts

Posted 27 November 2013 - 04:22 PM

I'm not sure I've seen the same thing reflected in the failed drives in our lab. In my pile of dead drives (a mix of manufacturers) there are no enterprise models; almost all are low-power consumer 3.5" models. Those include 2TB and 3TB HDDs, all of which failed early in their lives. We also have a couple of nearline 2TB SATA (7200RPM) drives that died as a group, all developing bad sectors and then losing data; that particular case involved drives all from the same model group.

 

Looking at all the nearline SATA or SAS enterprise models in the lab (not counting the drives mentioned above), I have one that's been stable at 1 bad sector for about two years now, and the rest are all cranking along like the day they first came in. Examples include the 4TB Hitachi 7K4000 SATA drives and the 4TB WD SAS RE4. We also have a number of 3TB and 4TB NAS drives in action; out of about 24 drive samples, one recently failed SMART. That batch has roughly 1,110 hours on it in a NAS we use for security cameras in the office.

 

Without knowing the exact type of drive the cloud storage company uses for its seed drive service, it's very hard to say why they don't want to use anything above 1TB. My guess is that they are using off-the-shelf consumer external drives for this service (I really hope not 2.5" drives) and are seeing a ton of impact- or shipping-related damage get the better of them. I personally haven't used any of these services, so I don't know whether it's a bare drive, a USB/FireWire enclosure combo, or maybe a RAID1 external unit being shipped to customers.


#3 jonnyz2

  • Member
  • 19 posts

Posted 27 November 2013 - 06:18 PM

Hi Kevin,

 

Thanks for sharing your experience! 

 

I have no idea what sort of drives they are shipping and agree that their needs may be more related to shipping than the sort of use I'm thinking of.

 

My use is not high on-time or high-volume reads/writes, but occasional use of a photo-processing workstation where I generally write/read large files (20MB up to 1GB for a multi-image stitched file) while working on an image; once I've finished with a given image, I may occasionally access it and other images one or more times in the future.

 

My current setup is an SSD for the OS and "User" files and a RAID-mirrored pair (2x2TB) for my photo file storage.  I also back up that mirror (to an identical drive) and take the copy offsite.  I tend to do this after trips or major processing sessions, which is OK but not really a robust offsite storage approach.

 

When I started looking into new drives I was thinking regular consumer drives (e.g. the Toshiba PH3300U-1I72 3TB for $110, which receives better-than-average reviews).

I've also looked at higher-end drives.  They seem to be 2-3x more expensive (e.g. WD Black 4TB at $270 or Deskstar 7K4000 3TB at $260) and, depending on the exact drive, some of their reviews don't seem much better than those of the lower-cost drives.

 

I'm not keen on spending two or three times more unless the drives are truly better.   At the same time, I don't want to be "penny wise and pound foolish" - after all, I do want a reliable setup for protecting important files!

 

Again, everyone's thoughts/ideas are appreciated!  Thanks!


Edited by jonnyz2, 27 November 2013 - 06:24 PM.

#4 dietrc70

  • Member
  • 104 posts

Posted 30 November 2013 - 05:02 PM

For transferring data, I wonder if good laptop drives would be best.  Shock resistance is an essential feature for a laptop drive while it might be an afterthought in the design of even an expensive desktop drive.

 

Many users on forums and reviews have wondered how much shipping abuse contributes to DOA or other drive failure.


#5 FastMHz

  • Member
  • 400 posts

Posted 01 December 2013 - 01:19 PM

I'd be willing to bet shipping abuse accounts for more than 90% of DOAs.  USPS, FedEx, UPS, DHL *all* toss packages around without a thought.

 

Also, for critical data I only use drives of at most 2TB.  The larger ones (a set of four 4TB drives in my portable NAS) hold just a third copy.  I don't trust them... yet.


Edited by FastMHz, 01 December 2013 - 01:20 PM.

Production: Vishera 8350/32gb RAM/Dual SSD/VelociRaptor/Radeon 7750
Gaming: Phenom II 955/16gb RAM/SSD/VelociRaptor/Radeon 7950
Retro: K6-2 550/256mb RAM/160gb HDD/CompactFlash/3DFX/ATI AIW Pro/SB16/DB50XG
http://www.fastmhz.com

#6 continuum

  • Mod
  • 3,467 posts

Posted 02 December 2013 - 05:17 PM

I doubt it's that high. Internal handling problems during the picking process (before drives are packed and shipped) are a huge contributor; properly packed drives should handle shipping just fine.


#7 anywhere

  • Member
  • 37 posts

Posted 06 December 2013 - 08:10 AM

That's the reason I haven't jumped on 3TB drives yet. I'm not sure the extra 1TB for the same $100 bill is worth the headache of chasing down duds, stressing other array components during rebuilds, etc.

Then again, as mentioned, maybe the reviews really are that badly statisticfied? Is that a word?



#8 dietrc70

  • Member
  • 104 posts

Posted 07 December 2013 - 01:45 PM

My impression is that the expensive enterprise 3TB and 4TB drives are fine, but at lower price points it would be better not to go above 2TB right now.

 

It's also good to remember just how long it takes to fill, back up, or rebuild a 3 or 4TB drive, especially a slow consumer model.  Two 2TB drives are much easier to deal with and usually cheaper than one 4TB.
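
To put rough numbers on it: copying 4TB straight through at an average of ~120 MB/s already takes over nine hours, and real-world fills and RAID rebuilds are usually slower than that.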


#9 MRFS

  • Member
  • 190 posts

Posted 08 December 2013 - 03:10 PM

Forgive me if I repeat any of the fine points already made above.

 

I was very disappointed recently when a WDC 2TB enterprise-class HDD failed with less than 3 years of normal use in a production workstation: Newegg had sold it as brand new, entitled to a 5-year warranty, but WDC's warranty database showed it as "REFURBISHED" when I tried to request an RMA.

 

Happily, Newegg has agreed to replace it with a brand new one.  I'm hoping and expecting that the replacement will also be brand new, with a full 5-year warranty: time will tell.

 

In light of some of the serious downsides all of you have reported here, one option that came to mind is a RAID 6 setup managed by a decent RAID controller with automatic or semi-automatic recovery.

 

I don't use RAID 6 arrays myself, but my guru in San Diego, Roger Shih at Micro City, Inc., tells me that 2 member drives can fail at the same time and a good RAID 6 controller can still recover from that failure.  He has built rack-mounted servers using Areca controllers and 12 connected drives.
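
(With 12 drives in RAID 6, for example, you get the usable capacity of 10 drives, and any two of the 12 can fail at once without data loss.)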

 

I also cannot stress enough the importance of proper temperature and dust control, and of reliable, quality input power: all of our workstations are powered by APC battery backup units.  And I'm happy to say that we only recently disassembled a legacy PCI system, after it had run very well for 10 YEARS with a combination of PATA and SATA HDDs.

 

And, with very little experience beyond 4 x Toshiba laptop drives that we bought several years back, I can say that we've moved those around quite a bit, and all 4 are still running now in JBOD mode with no bad sectors: I am pleasantly surprised by their longevity.

 

WDC's latest 2.5" VelociRaptors could be a very functional compromise, particularly if they are housed in a modern 4-in-1 enclosure like the excellent designs I'm seeing from IcyDock: the latest VRs also support a 6G SATA-III interface and are rated at 200 MB/second.  As such, builders are looking at 4TB in each 5.25" form factor, or a rack-mounted enclosure with dedicated 2.5" bays.

 

That VR model number at WDC is WD1000CHTZ.

 

Hope this helps, and thanks for this candid discussion.

 

 

MRFS


#10 MRFS

  • Member
  • 190 posts

Posted 08 December 2013 - 04:14 PM

p.s.  One more thing, and I don't mind if you chuckle a little at this suggestion:

 

Several years ago, when we were hit with a very bad virus that spread across our LAN, I gave a lot of thought to the most cost-effective way to minimize the probability of any future damage of that nature.

 

Believe it or not, we couldn't ignore the FACT that a PC that is turned OFF cannot be infected with any virus or malware.  Duuuh!

 

So, we started building cheap "storage servers" with very little software and lots of HDDs filling all available drive bays in each chassis.

 

Once we put those cheap storage servers into operation, it became rather obvious that we were only running them long enough to do a routine XCOPY backup of all our key databases and drive images.

 

Windows XCOPY works just fine across a LAN too, both "pulling" and "pushing".
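
For instance (the server name and paths here are placeholders, not our actual setup), a "push" to the storage server and a "pull" back from it look like this:

    REM Push local data up to the storage server's share
    XCOPY D:\Data \\STORAGE1\Backup\Data /E /D /C /H /K /Y

    REM Pull the server's copy down to a local folder
    XCOPY \\STORAGE1\Backup\Data E:\Restore\Data /E /D /C /H /K /Y

(/E includes all subdirectories, /D copies only files newer than what's already at the destination, /C continues past errors, /H picks up hidden and system files, /K preserves attributes, and /Y suppresses overwrite prompts.)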

 

Once that backup task was done, there was no need to keep running those storage servers, chiefly because they were NOT being used for any other production tasks.

 

Consequently, the run time that accumulates on those storage servers is an absolute minimum, and THAT in turn justifies the statistical expectation that they will keep running for a long time.

 

Recall above where I mentioned that one legacy PCI system ran for 10 YEARS before we finally disassembled it.  It was still running AOK, but the PCI chipset was no longer useful for our R&D and patent-related research.

 

I truly believe that such longevity can be expected of these mostly "off-line" storage servers, particularly if they are turned OFF immediately after the backup tasks are done.


Edited by MRFS, 08 December 2013 - 04:17 PM.

#11 jonnyz2

  • Member
  • 19 posts

Posted 08 December 2013 - 07:46 PM

MRFS and others....

 

So, if you are in my situation where data security is paramount, would you use two mirrored arrays at 2TB per mirror, splitting data between the two arrays manually, or would you use a RAID 10 setup for a total of 4TB and no need to split data?

 

While I'm aware that I could run a RAID 6, I decided a few years back that doing so in a SOHO environment, without adequate drive monitoring and alerts, was complicated and risky, as two drives might fail and I might not be aware of the first failure in time.  I suppose the same risk applies to RAID 10, though.

 

Whatever I end up doing, it will have to be configurable on an Intel Z87 platform, with RST or other software that leverages the Z87 platform (max 6 drives).

 

Advice appreciated.  Thanks!


#12 MRFS

  • Member
  • 190 posts

Posted 08 December 2013 - 09:02 PM

Once again, when you mention "an Intel Z87 platform with RST," I read that phrase to imply that you want to maximize data integrity with a single motherboard.  If you look around at online retailers like Newegg, there are tons of refurbished and low-end PCs that will buy you a completely independent "mirror": back up all your data to a dedicated storage server as often as your needs require.

 

Just compare the cost of a low-end motherboard (e.g. G41 chipset) and a low-power CPU against the cost of losing valuable data entirely: what is the true time/labor cost of replacing the data you lost?

 

That cost can be enormous.  For example, if you have saved only one copy of your latest drive image, and the storage device hosting that image fails, what then?  Do you have the time to do a fresh install of the OS and then all additional software?  How much is an hour of your time worth?

 

We have storage servers running AOK with old Intel Pentium D 945 CPUs in LGA775 sockets: I think we paid our guru about $20 apiece recently for 5 of those; I know I sent him $100, and that covered shipping too.  Yes, the D 945 does not support virtualization, but the storage servers I'm talking about will never need that feature.

 

With twin systems, you are better positioned to build your primary SOHO system with enough redundancy to get through the occasional HDD failure.

 

And it's always a good idea to configure a "mirrored" RAID using a RAID mode with which you already have some experience.

 

Just PLAN on an HDD failure, because you KNOW that all storage devices fail at some point.  Just the other day, for example, I read that some retail SSDs that experience a sudden power loss are losing ALL of their data, i.e. a total loss, and the SSD is not recoverable either.  Heck, you won't see any mention of THAT failure mode in any SSD marketing literature!

 

The other things that I constantly stress, because they are relatively cheap and they buy you a LOT, are proper cooling and ventilation of your storage subsystem, and of course a quality UPS powering all of your systems.

 

Laser printers are a well-known exception, because they draw too much power when heating up for a UPS to handle them.

 

Lastly, we've had lots of success with RamDisk Plus from SuperSpeed: we've moved all of our browser caches to a 12GB ramdisk on our primary workstation.  In the long run, given all of the work we do on the Internet during any given work day, hosting those "cached" files in RAM has relieved the HDDs we do have of all that wear and tear, resulting in greater longevity.

 

 

Finally, in case you don't already know about this: some rotating platters, like Western Digital's "Black" series, can consume a lot of time doing internal error checking and recovery, and some RAID controllers will drop them from the array because they are NOT responding quickly enough to routine "polling" by those controllers.  Because I don't have much experience with Seagates, I strongly recommend that you stick with Western Digital's RAID Edition HDDs, and buy ONLY the ones with 5-year factory warranties.

 

Compute price per warranty year, and the 5-year warranties usually come out ahead on that metric.
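
(Using made-up but realistic numbers: a $120 drive with a 5-year warranty works out to $24 per warranty year, while a $90 drive with a 2-year warranty comes to $45 per warranty year, so the "cheaper" drive is actually the more expensive one by this measure.)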

 

All of WDC's RAID Edition HDDs now come with a feature known as TLER (time-limited error recovery): it keeps an RE4 HDD in touch with the RAID controller by responding to the controller's polling requests in a timely fashion whenever they are issued.  Do NOT try to build a RAID array with WDC's "Black" Edition HDDs.  I did read some confirmation recently that WDC's "Red" NAS Edition HDDs now support TLER as well; however, I just checked and confirmed that WDC's 1TB "Red" NAS drive (model WD10EFRX) only has a 3-year warranty.

 

 

Hope this helps.

 

 

MRFS


Edited by MRFS, 08 December 2013 - 09:11 PM.

#13 dietrc70

  • Member
  • 104 posts

Posted 09 December 2013 - 03:38 AM

Quote (jonnyz2):

"So, if you are in my situation where data security is paramount, would you use two mirrored arrays at 2TB per mirror, splitting data between the two arrays manually, or would you use a RAID 10 setup for a total of 4TB and no need to split data? [...]"

 

I use RAID 10 with a hardware RAID card (Adaptec 6805E) and Supermicro 5-in-3 hotswap enclosures.  I could also configure a hot spare, which the controller would automatically spin up and rebuild to in the event of a failure, with no user intervention needed.

 

If a drive fails, the hardware card sounds an earsplitting alarm.  If the hotswap enclosure's fan fails or the enclosure starts to overheat, it sounds an earsplitting alarm and flashes red lights.  You can also configure the RAID software to notify you by email of any problem.

 

I use RAID 10 because it is simple, has fast write performance, and has excellent fault tolerance.  A RAID 10 array can withstand multiple drive failures (so long as both members of the same mirrored pair don't fail), and rebuilds take only a few hours since there is no parity to calculate.  RAID 10 is also ideal for working files (like a photographer's) because of its high write speeds.  Decent write performance on RAID 5 or 6 requires a battery-backed cache, and such controllers are far more expensive than the entry-level controllers capable of RAID 10/1/0/1E.  I think RAID 5 should be avoided, and that RAID 6 only makes sense for dedicated file servers.

 

I would recommend a hardware controller over Intel's built-in RAID.  Intel RST isn't bad, but it is far slower in my experience.  With a hardware controller you can move the whole array to another machine in the event of a failure, all the functions are OS-independent, and you get perks like onboard failure alarms and much higher-quality SFF-8087 cable connectors.

 

RAID is not backup, of course.  I use CrashPlan for cloud backup of my most important data.


Edited by dietrc70, 09 December 2013 - 04:03 AM.

#14 jonnyz2

  • Member
  • 19 posts

Posted 09 December 2013 - 12:13 PM

Thanks for the TLER tip.  I had read about the drive setting, but not that WD Black could have a problem.

 

I agree with your point about time having real value, and I don't mind changing plans.  Let me share more of the background so that everyone can better understand my needs.

 

I built a NAS before they were readily available for SOHO use.  It worked fine for everything except photo processing.  The combined throughput of the NICs, network, drives, etc. was fairly poor, and I discovered somewhat late that it was dramatically reducing working speeds with larger photo files when I was processing on my workstation and saving to the NAS.

 

I asked for advice on upgrading my machine, and several photo experts suggested first moving the image storage, at least for working files, back to the workstation itself before looking at CPU/mobo upgrades.  They also suggested an SSD for the OS, page file, etc.

 

They were right!  I made both changes some time ago, and they provided a huge improvement in opening files, working on files (due to paging), and saving files.

 

To your point about RAM and a RAM disk: I will shortly be setting up a new workstation with 32GB of RAM, which should provide more than enough space to fully process the largest files in memory without any Photoshop disk paging.

 

Because of my previous experience with the NAS, I have felt that moving my images back to a NAS would be a step backwards in terms of working time.  Now, if my work style were one image at a time, it would probably not be a big deal to work on it on the workstation and save the final version to the NAS.

 

However, on a given shoot/trip, I might come back with several thousand images.  In the past, that has been somewhere between 40-80GB for each image set. 

 

By the way, that's in compressed raw format at about 14MB per image.  As soon as an image is opened in Photoshop and resaved, it will hit at least 40MB.  It's not uncommon, though, for a file to be 100-300MB, and in some cases over a GB for large stitched panorama files.  Fortunately, I typically only process about 5% of the images in a given set with Photoshop, so most stay around 14MB.  The file sizes, and the sizes of entire sets, will at least triple if I upgrade my camera to one of the larger-sensor models currently available.

 

I tend to work on an entire set of images for a period of time.  Because of that, I want those images on a fast read/write storage platform with data redundancy.  I have assumed this means working on the set on HDDs or SSDs directly attached to the mobo.  If that assumption is correct, then mid-term storage is definitely workstation-based.

 

The next question is one of long-term storage.  Do I just keep the images in a large RAID 1 on the workstation, or migrate each image set to a NAS after processing?

 

I'm not sure, though, that a second machine solves the single-point-of-failure question, as the NAS also has a single point of failure in its mobo.

 

Or is the suggestion to use RAID 1 in the workstation AND back everything up to a RAID on the NAS?

 

As mentioned in one of my first posts at the top of the thread, I'm also doing some offsite backup.  In addition, I'm considering signing up with a cloud storage company for a better offsite backup program.

 

So, I now have more questions than I started with.  This is really raising questions about the best working-storage strategy, the longer-term storage and backup strategy, AND which RAID level to use for the workstation and/or the NAS.

 

Again, any advice will be appreciated!

 

 

 

 


Edited by jonnyz2, 09 December 2013 - 11:14 PM.

#15 MRFS

  • Member
  • 190 posts

Posted 09 December 2013 - 12:49 PM

>  I will shortly be setting up a new workstation with 32GB of RAM, which should provide more than enough space to fully process the largest files in memory without any Photoshop disk paging

 

RamDisk Plus from SuperSpeed has a feature that saves the ramdisk's contents at shutdown and restores them at startup:

 

http://superspeed.co...top/ramdisk.php

 

 

I'm presently using 4 x Hitachi 15,000 RPM 2.5" SAS HDDs to save that ramdisk at shutdown, and the task runs at around 600 MB/second.  I'd like to upgrade that subsystem to 4 x Plextor M5P Extreme SSDs, because they do excellent garbage collection withOUT TRIM:

 

http://www.xbitlabs....nh_5.html#sect0

(cf. scores "After 30 Min. Idle")

 

 

If you go with a Z87 chipset, TRIM now works with RST on a RAID 0 array controlled by the chipset e.g.:

 

http://www.newegg.co...N82E16813131978

 

http://www.rwlabs.co...62&pagenumber=1

(test was done with ASUS P8Z77-V PRO LGA 1155 Intel Z77)

 

So that would be a good place to save and restore a ramdisk's contents with RamDisk Plus, to accelerate startups and shutdowns.

 

 

This next ASUS Z87 board has 2 x Thunderbolt ports, if you are interested in exploring that option for external storage:

 

http://www.newegg.co...N82E16813131987

 

 

Also, I've tried Windows NTFS compression with RamDisk Plus, and it works too; this might be a useful tradeoff if your CPU is a powerful multi-core model.

 

I remember from my image-processing days, many years ago, that graphic images are often very compressible.


#16 MRFS

  • Member
  • 190 posts

Posted 09 December 2013 - 12:58 PM

Something to watch out for:

 

https://communities....om/thread/44110

 

So there's an inconsistency between the SandForce 2281 controller and the Z87 chipset under Windows 8.  From that thread:

[begin quote]

...

It is specifically related to the controller in the SSD itself since the Intel® Z87 Chipset provides TRIM to the Samsung* SSD.

[end quote]


Edited by MRFS, 09 December 2013 - 12:59 PM.

#17 MRFS

  • Member
  • 190 posts

Posted 09 December 2013 - 02:54 PM

FYI: Samsung's white paper, "Understanding SSD System Requirements":

 

http://www.samsung.c...itepaper02.html

 

http://www.samsung.c...equirements.pdf

 

[begin excerpt]

 

RAID, which stands for Redundant Array of Independent/Inexpensive Disks, is a type of storage system in which a number of drives (at least 2) are combined into one logical unit. RAID is used to improve performance, improve reliability, or some combination of these two. Data can be distributed among the drives in a RAID array in one of several ways (called RAID levels). The most common RAID levels are RAID 0 and RAID 1.

 

With the introduction of its 7 Series Chipsets and the latest Intel Rapid Storage Technology (IRST) drivers (11.0 or later), Intel is now fully supporting SSD technology, including the TRIM maintenance command, in RAID 0 arrays.

 

In the past, the lack of TRIM for RAID 0 was a source of frustration, as the performance improvements initially gained through the RAID array were mitigated by the performance deficits caused by the lack of TRIM. Thus, with the addition of TRIM support for RAID 0, it is useful to understand RAID technology and who (and why) an individual might choose to use it.

 

[end excerpt]

 

 

Also NOTE WELL that some users are reporting the loss of TRIM support with SandForce SSDs as members of a RAID 0 array managed by Intel's RST (Rapid Storage Technology).

 

Look before you leap (ahead)!


#18 MRFS

  • Member
  • 190 posts

Posted 09 December 2013 - 03:28 PM

FYI: our email today to Samsung's executive management contacts:

 

 

Dear Samsung,

I'm a very happy user of 4 x Samsung model 840 Pro SSDs.

I assembled a RAID 0 array that does not support TRIM, however, because that array is managed by a HighPoint RocketRAID model 2720SGL RAID controller.

When helping other SSD users at Internet forums, the same question keeps coming up:

What combination of hardware and software is required to guarantee TRIM support with Samsung's SSDs in a RAID 0 array?


It appears that an Intel 7 Series chipset must be managed by the latest version of Intel's RST (Rapid Storage Technology).

Samsung's White Paper here is excellent:

http://www.samsung.c...equirements.pdf


Nevertheless, it would be great if you could identify the group of specialists within Samsung who can answer routine questions from customers less experienced than myself.

Many thanks, in advance, for your professional assistance.

Keep up the good work!


p.s. If you can schedule the time to review another matter: these next measurements, "After 30 Min. Idle", do show a LOT of variation among modern SSDs BEFORE the TRIM command is issued; this is extremely important for all RAID 0 arrays controlled by third-party controllers that do NOT support TRIM:

http://www.xbitlabs....nh_5.html#sect0


Samsung's SSDs would be more competitive if they also scored higher "After 30 Min. Idle" and before TRIM, i.e. closer to the top scores measured with Plextor's M5 Pro 256GB SSD.


 


#19 jonnyz2

  • Member
  • 19 posts

Posted 09 December 2013 - 05:31 PM

 

Quote (dietrc70):

"I use RAID 10 with a hardware RAID card (Adaptec 6805E) and Supermicro 5-in-3 hotswap enclosures.  I could also configure a hot spare, which the controller would automatically spin up and rebuild to in the event of a failure, with no user intervention needed.

[...]

RAID is not backup, of course.  I use CrashPlan for cloud backup of my most important data."

 

dietrc70, thanks!

I didn't know you could have a hot spare on RAID 10.  Is a special enclosure required for that?  If not, the case I already have has space for 6 drives, so I could configure it in my existing case without a new purchase.

 

The one thing I wouldn't have is an audible alarm for the fan; however, there are software tools to measure case and HDD temps that should suffice (as long as I'm at the workstation and NOT using it as a remote NAS).

 

I had been under the impression that Intel's RAID controller and software were not OS-specific.  Is that incorrect?  If you could tell me more about what is OS-specific, I would appreciate it.

 

By the way, CrashPlan is the vendor I'm considering for backups, as they offer a seed drive service.  Are you happy with them?

 

Thanks!


#20 dietrc70

  • Member
  • 104 posts

Posted 10 December 2013 - 12:13 PM

You can have a hot spare for any kind of array if the controller supports it, and I think even Intel RST does.

 

According to an Intel whitepaper on RST under Linux, the Intel RAID BIOS checks arrays and lets you set them up, but then passes all control to software drivers once the OS boots, so it is a hybrid, mostly-software RAID solution.  The RAID metadata seems to be OS-independent, but the OS drivers still have to handle the RAID calculations and direct the drives individually.

 

I've been very pleased with CrashPlan.  I have about 1.4TB backed up on their unlimited single-user plan.  I have a good internet connection, so I just took my time selecting important directories for automatic backup.  I've been able to restore older versions of files easily when I deleted things by accident.

 

Their client software is very good.  It doesn't draw attention to itself but is still easy to configure and monitor.  Most importantly, it is smart enough to do block-level updates: if you tweak a 1GB Photoshop file, it only takes a few seconds to upload your changes instead of re-uploading the entire file.  By default it saves every file change, so you can recover earlier versions of a file if you want.  You can download the client and test one of their trial/free plans to see if you like the way it works.


Edited by dietrc70, 10 December 2013 - 12:14 PM.

#21 jonnyz2

  • Member
  • 19 posts

Posted 10 December 2013 - 12:16 PM

So, here are the drive choices that seem to make the most sense:

 

  • Toshiba PH3300U-1I72, 3TB: best reviews, best reported read/write performance (180MB/s), $118 delivered, 3-year warranty
  • WD Red WD30EFRX, 3TB NAS drive: so-so reviews, 145MB/s, $147 delivered; I believe these carry a 3-year warranty
  • Western Digital RE4-GP enterprise, 2TB: so-so reviews, 150MB/s, $120 delivered, 5-year warranty.  Cost-wise, I would need to buy twice as many to meet my near-term storage needs
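
(Running the price-per-warranty-year idea from earlier in the thread on those numbers: the Toshiba works out to about $39 per warranty year, the Red to about $49, and the RE4-GP to about $24, though the RE4-GP is also the most expensive per terabyte at roughly $60/TB versus about $39/TB for the Toshiba.)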

 

Thoughts/advice?


#22 dietrc70

  • Member
  • 104 posts

Posted 10 December 2013 - 04:54 PM

Of those drives, the Toshiba looks most appealing for price and performance.  I haven't bought Toshiba drives in a long time, but they tend to be pretty good.  The only question is whether they work well with RAID controllers.  Many desktop drives do, but they aren't tested and optimized for the purpose, so it's a bit of a gamble whether a RAID controller "likes" them.

 

You should also look into the WD Se line.  They are a bit more expensive, but they are both very fast (unlike the Reds) and fully RAID-qualified, and they have 5-year warranties.  The main difference between the Se and the Re seems to be that the Re is designed for heavy use in servers.  For a workstation, the Se is probably perfect.

 

I built my RAID arrays slowly, buying one or two WD REs at a time.  I started with a single 500GB drive when they were new, liked it, then bought another and mirrored them with Intel RST RAID; four years later I had six of them in RAID 10 on a hardware controller.

 

Considering your need for both speed and reliability, I would lean towards Western Digital Se's, Re's, or Seagate ES.3's.  You can use them for a long time, starting small and slowly growing the array as your budget allows.  You can also be sure they will work with a hardware controller if you upgrade to one later.


Edited by dietrc70, 10 December 2013 - 04:56 PM.



