
Expanding my Desktop Storage


6 replies to this topic

#1 SAC (Member, 3 posts)

Posted 02 May 2013 - 03:50 AM

Hi Forum

My first post here, so please be gentle. I'm quite experienced in technical matters (so I'll more than likely understand any replies that may be posted), but I don't have the knowledge that I need for this issue so I'm seeking help.

I'd like to increase the size and speed of the persistent storage in my home desktop (W7 64-bit Ultimate and likely to remain so) that is used for all purposes including digital video editing and some gaming. I also have concerns that the disk controller on my MB may be developing a fault. I might want to use RAID-0 to achieve better space utilisation on the video editing drives, i.e. to create a single very large volume (I'm aware that there will be little or no speed benefit from doing this).

I currently have the following storage devices inside my desktop:

1 x OCZ Vertex-2 240GB
5 x WD Velociraptor 600GB

and various other NAS and USB drives external to the desktop. I have an ASUS Rampage III Extreme MB and therefore have only 2 SATA III and 4 SATA II ports. I'd like to have all of my drives connected as SATA III, although I am aware that (i) my current SSD is only SATA II and that (ii) my HDDs will probably not see much benefit, as you can only scrape so many bits per second off a disk even if it is spinning at 10k. I'd also like to discontinue use of the disk controller(s) on my MB as there could be potential problems with them. I am likely to add a new SSD or two once I have some spare ports available.

I have a Coolermaster Cosmos 1000 case, a Corsair 1000W PSU and an NVIDIA GTX 480 GPU. So, I think I have a slot free for a card and sufficient power and bays to accommodate a little more storage.

The questions are ...

Is it possible to transfer the control of all of the storage (including the boot drive) to some sort of card?
Will this provide improved performance, or will there be a bottleneck generated where the card attaches to the MB? 8 SATA III disks could generate quite a bit of traffic.

Any help appreciated!

Simon

#2 dietrc70 (Member, 104 posts)

Posted 03 May 2013 - 05:16 PM

You might consider one of the entry-level hardware RAID cards like the Adaptec 6805E. They are reasonably priced and support RAID 0, 1, and 10. Since they have their own BIOS, they can replace your internal SATA entirely if you wish. (FYI, enterprise-level RAID cards boot slowly and test all drives on startup--they are designed for 24/7 operation on servers and workstations where boot speed is not a consideration.)

RAID 0, especially on a hardware card, will give you a considerable sequential read/write boost. Video editing and large scratch-file manipulation are among the few applications where RAID 0 makes sense, as long as you are aware of the greater potential for array failure.

If you added another Velociraptor, you could have a 1.8TB RAID 10 array that would have tremendous read/write speeds along with fault tolerance. That might be worth considering if you would like to simplify your drive utilization.
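Back-of-envelope arithmetic for that six-drive RAID 10 (the ~150 MB/s per-drive sequential rate is an assumption for illustration, not a measured figure):

```python
# Rough numbers for a 6-drive RAID 10 built from 600 GB VelociRaptors.
drives = 6
capacity_gb = 600
seq_mb_s = 150                             # assumed per-drive sequential rate

usable_gb = drives * capacity_gb // 2      # RAID 10 mirrors, so half the raw space
read_mb_s = drives * seq_mb_s              # reads can be served from all spindles
write_mb_s = (drives // 2) * seq_mb_s      # each write goes to both halves of a mirror pair

print(usable_gb, read_mb_s, write_mb_s)    # 1800 900 450
```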

The 6805E uses a 4x PCIe 2.0 slot. That is plenty of bandwidth (2GB/sec!) as long as you use the right slot.

Be sure to back up your data before transferring drives between controllers. If you define a drive as a JBOD on a RAID adapter and do not initialize it, it should appear as a single drive in the OS with its data intact.

You could also look at simple HBAs (which are not necessarily cheaper), which simply function as SAS/SATA drive adapters. You could then set up a RAID 0 array using Windows' built-in software RAID features.

#3 SAC (Member, 3 posts)

Posted 04 May 2013 - 04:27 AM

Dear dietrc70

Thank you so much for your helpful, informative and thought-provoking reply. I now have some further research to do, but I anticipate that at the end of that I will probably add to this post with the steps I intend to take to see if you, or anyone else, can spot any flaws with the approach.

Meantime I am reassured to know that what I'd like to do is essentially feasible, insofar as I can eliminate my existing SATA infrastructure and expand my storage with a potential improvement in performance. I note you are into RAID. I have had some experience with this in a corporate environment, but in a home setting I am somewhat sceptical over the benefits, not least in my environment. I try to buy good quality drives and so I expect failures to be few and far between. I have a very robust backup regime and sufficient spare space around to be able to squeeze the contents of a failed drive onto another volume until I can obtain a replacement and perform a restore. So the actual downtime is not that protracted and I get to store the maximum amount of data on each of my drives. However, I do use RAID-0 to establish large volumes, hence achieving better overall space utilisation and simpler management. I agree that hardware RAID is superior to software and consider JBOD to be less preferable as I'm prepared to initialise the drives and re-load the data from backups.

It seems that there is a limit of 8 ports, so I imagine that I will end up with:

SSD 1 (480GB, new) - OS and programs
SSD 2 (240GB, existing SATA II) - copies (using GoodSync) of multimedia files from one of my NAS drives (for reasons of performance and W7 library indexing)
HDD 3 (1TB, new) - my general data
HDD 4 (600GB, existing)
HDD 5 (600GB, existing)
HDD 6 (600GB, existing)
HDD 7 (600GB, existing)
HDD 8 (600GB, existing)

I'd like to create two RAID-0 pools from HDD 4-8 for use with video editing. One pool (HDD 4,5,6) for source and another (HDD 7,8) for target. I think this will work better than a single pool, as otherwise I believe a bottleneck could be created at the controller on the drive.

Right now, my main worry is whether my MB and case will accommodate the card. The RAMPAGE III Extreme has (according to the manual):

4 x PCIe 2.0 x16 slots (x16/x16, x16/x8/x8 and x8/x8/x8/x8 configurations supported)
1 x PCIe x4 (version/generation not stated)
1 x PCI 2.2

I have an NVIDIA GTX 480 installed in the top slot and that would rule out the use of the second x16 slot. The remaining slots are empty. If the card must have an x4 slot then I have one, but need to find out what version / generation it is.

Or, on reflection, perhaps I could find an x8 or x16 card and use one of the PCIe 2.0 x16 slots. With an x16 card I'd have 8000MB/s bandwidth and an x8 card would provide 4000MB/s. I realise the 600MB/s SATA III capability on my Raptors applies to the interface and that the platters and heads won't be able to deliver anything like that, but 8 x 600MB/s is 4800MB/s and thus 8000MB/s is clearly OTT, 4000MB/s is probably ample but 2000MB/s could be on the tight side. So, maybe an x8 card in a PCIe 2.0 x16 slot would be a nice configuration?
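As a sanity check on those slot figures (assuming ~500 MB/s of usable payload per PCIe 2.0 lane after 8b/10b encoding overhead):

```python
# PCIe 2.0 usable payload is roughly 500 MB/s per lane.
per_lane_mb_s = 500
for lanes in (4, 8, 16):
    print(f"x{lanes}: {lanes * per_lane_mb_s} MB/s")
# x4: 2000 MB/s, x8: 4000 MB/s, x16: 8000 MB/s

# Worst case: all 8 ports streaming at the SATA III interface limit.
aggregate = 8 * 600
print(aggregate)  # 4800 -- above even an x8 slot on paper, though
                  # platter drives never come close to 600 MB/s each
```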

By the way, do you anticipate any issues with TRIM on my SSDs?

Thanks again - much appreciated.

Simon

Edited by SAC, 04 May 2013 - 10:44 AM.

#4 dietrc70 (Member, 104 posts)

Posted 05 May 2013 - 12:05 AM

You're welcome. But you are worrying about something that is simply not an issue.

You will not have a bottleneck at the controller. No platter drive can saturate a SATA II interface, let alone SATA III. The only difference is buffer transfer rate, which shows up on benchmarks but is irrelevant to real-world performance.

Not even an eight-drive platter drive RAID0 can saturate a PCIe 4x 2.0 bus. The controller I listed as an example could handle an eight-drive 10K (or 15K) RAID0 array easily, and the only bottleneck would be at the level of the individual mechanical drives (i.e. ~180MB/sec).
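To put rough numbers on it (the ~180 MB/s per-drive rate is an estimate, not a benchmark):

```python
# Eight 10K platter drives at a realistic ~180 MB/s sequential each.
array_mb_s = 8 * 180
pcie2_x4_mb_s = 4 * 500   # usable payload of a PCIe 2.0 x4 link

print(array_mb_s, pcie2_x4_mb_s)  # 1440 2000 -- the bus still has headroom
```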

The only way you could need more PCIe bandwidth would be if you plan to make large SSD RAID arrays, in which case the latest generation Adaptec 7 series (PCIe 3.0 8x) or the LSI equivalent would be the best choice.

A PCIe 2.0 4x card will work at full speed in a PCIe 2.0 4x, 8x, or 16x slot. The card and slot don't have to match; the link runs at the speed of whichever is slower.

Unfortunately, TRIM doesn't work on most RAID adapters. You can compensate for this limitation with overprovisioning. If your SATA adapter weren't giving you trouble (be sure to check your cables), I'd recommend leaving single SSD's on the motherboard's native SATA III ports.
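If an SSD does end up behind the RAID card without TRIM, the usual workaround is to leave part of it unpartitioned so the controller always has spare blocks for garbage collection. A rough sketch (the 20% reserve is a rule of thumb, not a vendor figure):

```python
# Manual overprovisioning: partition only part of the SSD and leave the
# rest untouched as a free-block pool for the drive's garbage collector.
ssd_gb = 240                  # e.g. the existing Vertex 2
reserve_fraction = 0.20       # rule-of-thumb reserve
usable_gb = ssd_gb * (1 - reserve_fraction)

print(usable_gb)  # 192.0
```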

(PS--If you think you may need more ports, you might want to look at the 16 port 71605E. I think LSI makes a 16 port model as well.)

Edited by dietrc70, 05 May 2013 - 12:09 AM.

#5 SAC (Member, 3 posts)

Posted 05 May 2013 - 05:10 AM

Dear dietrc70

Thanks again for taking the trouble to provide a considered response - much appreciated.
I understand that there is a degree of backward compatibility and interoperability built into the cards and slots, so that speed will be negotiated down to that of the slowest component. However, I don't think an x4 card will operate in my x16 slots, as the MB manual lists the supported configurations and only x8 and x16 cards are specified.

Unless I can determine the generation of my x4 slot (if it is 1st generation, that would give me only 1000MB/s), I would feel happier using an x16 slot and hence would need an x8 card.

I think I can live with a limit of 8 ports, as after that space in my case will become an issue. At some stage I'll probably end up upgrading my 600GB Raptors to 1TB versions. Maybe I'll be able to build some sort of external enclosure, as I don't like the thought of throwing these drives away while they're still working well.

The big issue now is with the SSDs and TRIM. Your idea of leaving these on the MB is an attractive one, as I'd be very loath to lose the TRIM function, and it would also ensure that bandwidth on the card would be a complete irrelevance.

I may have to run some tests to see if I can put my worries about the health of the on-board SATA III ports to bed. Although there have been more serious issues in the recent past, the current situation is only that I see a single mv91xx error [The device, Device\Scsi\mv91xx1, did not respond within the timeout period] in my event log at each boot or resume from sleep. It could just be a consequence of my configuration. My boot times are quite long, as Windows seems to need to spin up my 4 WDC MBE drives. Each of these takes a while, and they spin up sequentially, so the overall wait is significant.
You have helped clarify my thinking enormously!

Simon


#7 ServerStation668 (Member, 6 posts)

Posted 28 May 2013 - 06:37 PM

Here is an overview of the Dell T7600 workstation:


