alpha754293

Member
  • Content count

    2007
  • Joined

  • Last visited

Community Reputation

0 Neutral

About alpha754293

  • Rank
    Member

Contact Methods

  • MSN
    alpha754293@hotmail.com

Profile Information

  • Location
    Windsor, ON
  • Interests
    SAE, NSPE, MSPE, ASME, R&D, CFD, FEA
  1. (Sorry, I tried to search before posting, but I couldn't find where the search button was.) I have a server with the following RAID5 configuration: 8x HGST 6 TB 7200 rpm SATA 6 Gbps drives on an LSI MegaRAID 9240-8i (already running the latest firmware). I'm trying to replace the array one drive at a time, going from the 6 TB drives to 10 TB drives (HGST He10 10 TB 7200 rpm SATA 6 Gbps), and the controller is estimating about 105 hours(!) to rebuild an array that has no data on it. The rebuild rate on the controller is already set to 100. Why is it so slow? Short of deleting the virtual drive configuration, swapping all of the drives in the array from 6 TB to 10 TB, and then creating a new virtual drive, is there anything else I can do to speed up the rebuild process? (See the rebuild-rate sketch appended after this list.) Thank you.
  2. Has anybody ever seen this before? (Where Cygwin doesn't report the available space on a Windows Storage Spaces pool correctly; a quick cross-check is sketched after this list.)
  3. Gotcha. Good tip. Thanks!
  4. Those are the kinds of things that make me nervous about using it as a backup server, because I ran into similar types of problems with ZFS before, with similar outcomes. (I mean, I haven't had any problems with Storage Spaces yet, but if I know what the risks are, then I can at least try to plan for or protect against them.) Luckily, this is going to be the backup server (the primary server is just running straight-up HW RAID5 in a single, monolithic array/volume), so I don't feel so bad if this backup server fails. Thanks. (I'm also curious as to whether I can enable compression on a storage pool (and trying to understand the risks associated with that), and also whether I can enable/run deduplication to reduce the amount of space that's actually consumed, a la ZFS.) Thank you for helping me and answering my dumb questions about it. Like I said, I'm only just now beginning to learn about Storage Spaces.
  5. Here are some more "generic" questions about the details of Windows Storage Spaces that I am hoping someone here can help answer: 1) Is the parity calculation for a parity storage pool multithreaded? 2) How does Windows Storage Spaces know which drives are in a pool? Is it by some kind of GUID from the hard drives that are members of that pool, or is it something else? (For example, if a drive were to fail and I swapped its PCB, would that be enough to bring the pool back online?) 3) If a pool goes offline (or is in some other sort of "degraded" state), what data recovery tools are available to extract the data off the drives? Any help with these questions would be greatly appreciated. Thank you.
  6. http://www.supermicro.com/products/chassis/2U/826/SC826TQ-R800LP.cfm This is the chassis that it's going in. It came with the $200 dual Xeon system. For low-profile cards? None that I've seen that don't cost significantly more. The cheapest of the cards I've found is going to be around $600 (see the Newegg link above). Conversely, I'm looking right now at an 8-port card (http://www.ebay.com/itm/121485896238?_trksid=p2055119.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT) that's going for $115 max, figuring that it is very hard for me to justify the cost of a $600 RAID HBA just to be able to build a single, monolithic array. I mean, I would like to, but it's REALLY difficult to justify when the host system was only $200 (and the drives were about $113 each when I bought them). The reason I posted the question here was that I wasn't sure whether somebody out there has experience with a cheaper alternative (than the $600 card) that would be able to meet my requirements (low-profile/half-height, 12 ports (3x SFF-8087)). From all of the research that I've been able to do, it doesn't really look like it, but that's why I figured I'd ask, in case there was something that I didn't know about. *edit* And to be perfectly honest, I actually bought the dual Xeon system BECAUSE of the chassis, not necessarily because it was a dual Xeon with 16 GB of RAM and an 800 W power supply.
  7. Well, it depends. The problem I am having is finding a low-profile card that can take at least 12 drives. Most are usually 8 ports (if they're low-profile cards). It's an optimisation: if it would be better to spend the extra money so that I can build a single, monolithic RAID5 array, then it might be worth my while. But if I can use either ZFS or Storage Spaces and have IT build the parity volume with minimal performance degradation, then that may be the more cost-efficient way to go. Like I said, I can get an LSI MegaRAID SAS 9261-8i from eBay for $175. It's low profile and supports HW RAID5, but it only has 8 ports. I agree with you. Although I've had Adaptec before, the one that's currently in the primary server is an Areca ARC-1230, I believe (because my primary server actually has a riser to support full-height cards), and that's been working out pretty well. This time, I don't have the riser, and the chassis isn't set up for it either. That's exactly the dilemma that I am trying to solve/get advice on. This is what I've been able to find on Newegg (as an example): www.newegg.com/Product/Product.aspx?Item=N82E16816151138 It's the Areca ARC-1264IL-12 12-port PCIe 2.0 x8 SATA 6 Gbps RAID HBA (with support for RAID5), but it's going for almost $600.
  8. I know that it will start out as sequential for the initial/first sync, but I don't know how sequential it will be afterwards. I read on one of the MS TechNet blogs that the performance can drop from 140 MB/s down to 25 MB/s if Storage Spaces is what's building the parity data (and when you're doing the initial sync of 20 TB, that's going to take an awfully long time; see the sync-time sketch after this list).
  9. Thank you to the person who helped me split this topic. I added it to the other thread only because it was the only thing I really found that was relevant to the parity storage pool aspect of the discussion. The primary purpose of this server will be to act as a live backup/mirror to my first (30 TB) server. I already have the drives and the 12-bay system (it's another server, not a DAS or NAS), so really the last piece of the puzzle is what to do about the 12-port SATA RAID HBA that's either half-height or low-profile. I'm trying to keep the cost of the RAID HBA plus any expander cables to around $200 if at all possible. Speed won't be too much of a concern, since I'll likely run rsync or similar once a week to keep the two servers synced up with each other. But it would depend on whether I can actually get said half-height/low-profile 12-port SATA RAID HBA, because if I can't, then I would either need to build two RAID5 arrays (one on the onboard controller and one on another controller card) and then stripe them together using a Windows Server 2012 R2 storage pool (as a striped volume), or I would have to just leave all of the drives as JBOD and then create a ZFS zpool.
  10. Hello. Sorry for jumping into this discussion late, but I'm just looking into Windows Server 2012 R2 for deployment at home (possibly). What I'm trying to do is build my second 10 * 3 TB SATA array, but unfortunately, the 2U system that I got only takes half-height cards (2x PCIe 2.0 x8, 1x PCIe 2.0 x4), and there are 6 SATA headers on board, so it looks like I'm going to have to use the onboard controller along with another RAID HBA. I only recently learned about Windows Server 2012 R2 storage pools and how they can create a parity pool. Does anybody have any benchmark data on how that performs vs. various HW RAID solutions? From the reliability/fault-tolerance perspective, would it be better for me to create a parity storage pool in Windows Server 2012 R2 (so that the parity data is written across both controllers), or would it be better to create two RAID5 arrays using the controllers and then create a striped pool to bridge the two arrays? (I'm trying to optimise between capacity, fault tolerance, and write speed; a rough usable-capacity comparison is sketched after this list.) Any insights or thoughts on this dilemma would be greatly appreciated. Thank you.
  11. Thanks. I thought about using ZFS because then I could enable compression on it and also turn on deduplication, so it would save a bit of physical storage space. That was the logic/reasoning behind it. But I do agree about NTFS, though. Such a shame that it doesn't have de-dup natively (although I was reading that Windows Server 2012 has it, or something very similar to it, but I've not read ALL of the details in terms of how it works/how it does the dedup), so... Thanks for your feedback.
  12. Thank you all for your input. I actually purposely left out a few of the details because I didn't want to contaminate or influence the opinions of the people who have been providing them. So, I'm going to fill in the back story now. My current RAID array is 10 * 3 TB drives in RAID5. They're Hitachi 3 TB drives that I bought a couple of months before the floods in Thailand, not knowing that there was going to be a flood, but because they worked out to be something like $114 per drive, it was a really good deal. Yayyy eBay. My original intent was for the first 10 drives (out of a box of 20) to be deployed in my current server and the second group of 10 to be deployed as the backup. BUT now you might be asking, "so why did you ask?" Well... I want to make sure that that is still the better way to go, and that LTO and BD-DL or BD-XL haven't replaced it in terms of cost, viability, and reliability. I haven't built the second backup server yet because I'm still debating whether I want to keep it as NTFS or whether I want to migrate back over to ZFS for the backup server. (Aside from one or two other experimental file systems, those are the only ones that will support a single volume that size.) I actually already have the drives; I'm just waiting to get the enclosure and the rest of the server hardware (motherboard, CPU, PSU, etc.) before I can launch and deploy the backup server, but I just wanted to make sure that the initial plan still held its own when tested by other people. And given the feedback, it seems like hard drives are the way to go, although trying to expand the capacity of either system would be a little harder, unless I just get bigger drives, but the same number of them. Thank you all for your feedback/input.
  13. So... mechanical hard drives are still the way for me to go then?
  14. What's the most cost-effective, cost-efficient way of backing up ~10 TB of data? The probability that the data is going to change is very small. Would it be better for me to just build another live hard disk array and power it up only when I want to run rsync, or would it be better to use some kind of optical media like BD-DL or BD-XL, or would it be better to go with something like an LTO-3/LTO-4 solution? (A rough media-count comparison is sketched after this list.) Backup speed isn't too critical/too important (as I can likely run the backup job overnight). I'm looking for something that has a relatively low cost of entry (initial capital expenditure) and also a reasonably low cost of maintaining the backup solution (as the volume of data increases over time). Advice/suggestions/comments are GREATLY appreciated. Thanks.
  15. secure data transfer

    Well, I am hoping that the data will be somewhat live, so that as my master copy is updated, so will the colo copy (or copies). Would it be better to encrypt the drive, or would encrypting just the archive be sufficient? Or both? I thought I read somewhere that the SHA-256 hash isn't as secure as it was once thought to be? Does it matter whether it's AES-CBC or AES-ECB? Yea... I'm not really THAT skilled in system administration to be able to set that up, let alone teach it to my friend, who's NOT a sysadmin. Can you SEND ZFS to another system that's NOT located on the local network? Or rsync over the net? How do you define a remote target for the send/receive commands? Or would it be an ssh/rsh port forward (on the router side of things)? (Forgive me for asking dumb questions.) I'm just trying to figure out the best way to transmit data securely. (A sketch of the usual zfs-send-over-ssh and rsync-over-ssh patterns follows below.)
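
For the 105-hour rebuild estimate in post 1: a minimal back-of-the-envelope sketch of the average throughput that estimate implies. The 10 TB drive size and the 105-hour figure come from the post; the assumption that the rebuild walks the drive's full capacity at a roughly constant rate is mine.

```python
# Implied average rebuild rate for a 10 TB member drive and a 105-hour estimate.
# Assumption: the rebuild touches the full capacity at a roughly constant rate.
DRIVE_BYTES = 10e12      # 10 TB, decimal as marketed
ESTIMATE_HOURS = 105     # figure reported by the controller

rate_mb_s = DRIVE_BYTES / (ESTIMATE_HOURS * 3600) / 1e6
print(f"Implied average rebuild rate: {rate_mb_s:.1f} MB/s")   # ~26 MB/s
```

At roughly 26 MB/s the controller is rebuilding at a small fraction of what a 7200 rpm SATA drive can stream sequentially, which is why the estimate looks so long even on an array with no data on it.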
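
For the Cygwin free-space oddity in post 2: a quick way to cross-check what Windows itself reports for the volume, independent of Cygwin's df. This is only a diagnostic sketch; the drive letter is a placeholder for wherever the Storage Spaces volume is mounted.

```python
# Cross-check free space via the Windows API (what Explorer sees),
# independent of Cygwin's df. "D:\\" is a placeholder drive letter.
import shutil

total, used, free = shutil.disk_usage("D:\\")
for name, value in (("total", total), ("used", used), ("free", free)):
    print(f"{name:>5}: {value / 1e12:.2f} TB")
```

If these numbers disagree with Cygwin's df, the discrepancy likely lies in how Cygwin queries the Storage Spaces volume rather than in the pool itself.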
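
For the throughput figures in post 8: how long an initial ~20 TB sync takes at the two rates quoted there (140 MB/s vs. 25 MB/s when Storage Spaces computes the parity), assuming writes are sustained at those rates the whole way through.

```python
# Initial-sync duration for ~20 TB at the two write rates quoted in the post.
DATA_BYTES = 20e12   # ~20 TB

for label, mb_s in (("140 MB/s", 140), ("25 MB/s (parity in software)", 25)):
    hours = DATA_BYTES / (mb_s * 1e6) / 3600
    print(f"{label}: {hours:.0f} h (~{hours / 24:.1f} days)")
# -> roughly 40 h (~1.7 days) vs. 222 h (~9.3 days)
```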
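
For the capacity/fault-tolerance trade-off in post 10: a rough usable-capacity comparison of the two layouts, treating the Storage Spaces parity pool as simple single parity for estimation purposes (the real on-disk layout and column counts differ, but "one drive's worth of parity" is close enough for a first cut).

```python
# Usable capacity: single parity across all 10 drives vs. two RAID5 groups
# (split across the two controllers) striped together.
DRIVES, SIZE_TB = 10, 3

single_parity_tb = (DRIVES - 1) * SIZE_TB       # one drive's worth of parity
two_raid5_striped_tb = (DRIVES - 2) * SIZE_TB   # one parity drive per group

print(f"Single parity pool        : {single_parity_tb} TB usable, survives any single drive failure")
print(f"Two RAID5 groups, striped : {two_raid5_striped_tb} TB usable, survives one failure per group,"
      " but two failures in the same group lose the whole stripe")
```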
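
For the backup-media question in post 14: a rough count of how many pieces of each medium ~10 TB needs, using nominal native capacities (no compression); prices shift too much to hard-code, so only the counts are shown.

```python
# How many pieces of each medium ~10 TB of data needs, at nominal native capacity.
import math

DATA_GB = 10_000  # ~10 TB
media_gb = {
    "BD-DL disc (50 GB)": 50,
    "BD-XL disc (100 GB)": 100,
    "LTO-3 tape (400 GB native)": 400,
    "LTO-4 tape (800 GB native)": 800,
    "3 TB hard drive": 3000,
}
for name, capacity in media_gb.items():
    print(f"{name}: {math.ceil(DATA_GB / capacity)}")
# -> 200 BD-DL / 100 BD-XL / 25 LTO-3 / 13 LTO-4 / 4 hard drives
```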
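
For the "send ZFS to a remote system / rsync over the net" question in post 15: both are normally done over ssh, which also covers the secure-transfer part since the stream is encrypted in transit. A minimal sketch, assuming command-line zfs, rsync, and OpenSSH are available on both ends; the host name, snapshot, dataset names, and paths are placeholders.

```python
# Two standard patterns for pushing data to an off-site box over ssh.
# Host, snapshot, dataset, and path names below are placeholders.
import subprocess

REMOTE = "user@colo.example.com"

# 1) ZFS replication: pipe "zfs send" of a snapshot into "zfs receive" on
#    the remote host. The remote target is simply the dataset name handed
#    to "zfs receive" at the far end of the ssh pipe.
subprocess.run(
    f"zfs send tank/data@weekly | ssh {REMOTE} zfs receive -F backup/data",
    shell=True, check=True,
)

# 2) rsync over ssh: after the first full pass, only changed files move.
subprocess.run(
    ["rsync", "-az", "--delete", "-e", "ssh",
     "/tank/data/", f"{REMOTE}:/backup/data/"],
    check=True,
)
```

Both patterns only need ssh on the remote box to be reachable (which may just mean forwarding the ssh port at the colo router); no rsh is involved.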