Showing results for tags 'RAID'.

Found 34 results

  1. Background: I'm building a NAS (FreeNAS, Xeon E5640, ~4GB ECC) to replace my Synology 1212 (single 2TB drive) with a more redundant and faster system. Primary usage will be ensuring the integrity of data that's rarely used but needs to be immediately accessible (I will have offsite backups as well as FreeNAS/ZFS snapshots), with an occasional need to transcode for Plex. The system will have an SSD-based SLOG/ZIL and L2ARC, and I will upgrade those in size so they should cover commonly used files. Thus, for the spinning portion of the system it is more important that it be reliable than that it be an extra 10% faster. My plan is for either RAID-Z1 or RAID-Z2 with 2TB disks (rough capacity math below). That more than doubles my current storage needs (which have been stable for several years), so I can put the extra money into redundancy rather than capacity I won't use.
     The Question: Given these goals (reliability > performance, though terrible performance would still be bad), what drives should I get? Lightly used vs. new? Heterogeneous models, vs. identical models from different batches, vs. identical models purchased from the same place? Aiming to keep it to $50/drive shipped or less, preferably more like $40. Any recommendations for specific model numbers, brands, or lines would be helpful as well. My habit in the past has been to buy whatever's cheapest from a major manufacturer, and that's served me just fine, but I'm trying to be a little more thoughtful this time. Thanks!
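     Rough capacity math for the two layouts, as a quick sanity check. This is a back-of-the-envelope sketch with an assumed 6-disk pool (the post doesn't say how many drives), and it ignores ZFS metadata and padding overhead:

         # raidz_capacity.py -- rough usable capacity for RAID-Z1 vs RAID-Z2
         def usable_tb(disks: int, disk_tb: float, parity: int) -> float:
             """Usable space is roughly (disks - parity) * disk size."""
             return (disks - parity) * disk_tb

         DISKS, DISK_TB = 6, 2.0   # assumed pool layout -- adjust to the real drive count
         for name, parity in (("RAID-Z1", 1), ("RAID-Z2", 2)):
             print(f"{name}: ~{usable_tb(DISKS, DISK_TB, parity):.0f} TB usable, "
                   f"survives {parity} drive failure(s)")

     With 6 x 2TB that works out to roughly 10 TB usable for RAID-Z1 vs 8 TB for RAID-Z2; the extra parity disk is the price of surviving a second failure during a resilver.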
  2. Hi, I have an LSI Syncro CS 9271-8i installed and I've found a small issue: I created Node A and it has a shared Virtual Disk (VD). When Node A reboots, the VD is transferred to peer Node B and stays on Node B, regardless of whether Node A comes back up. Is this normal? Does any expert here know how to transfer the VD from Node B back to Node A once A is online again? The controller information is below; I would appreciate any feedback. Thanks.

        #/opt/MegaRAID/storcli/storcli64 /c0 show
        Generating detailed summary of the adapter, it may take a while to complete.
        Controller = 0
        Status = Success
        Description = None
        Product Name = LSI Syncro CS 9271-8iQ
        Serial Number = SV40421776
        SAS Address = 500605b008929d30
        PCI Address = 00:85:00:00
        System Time = 06/25/2017 00:24:34
        Mfg. Date = 01/23/14
        Controller Time = 06/25/2017 00:24:33
        FW Package Build = 23.6.1-0018
        BIOS Version = 5.43.00.0_4.12.05.00_0x06000500
        FW Version = 3.330.05-2793
        Driver Name = megaraid_sas
        Driver Version = 06.811.02.00-rh1
        Vendor Id = 0x1000
        Device Id = 0x5B
        SubVendor Id = 0x1000
        SubDevice Id = 0x927B
        Host Interface = PCI-E
        Device Interface = SAS-6G
        Bus Number = 133
        Device Number = 0
        Function Number = 0
        Drive Groups = 2
        ...

        /c0/v0 :
        ======
        ---------------------------------------------------------------
        DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
        ---------------------------------------------------------------
        0/0   RAID0 Optl  RW     Yes     RWTC  -   ON  558.406 GB
        ---------------------------------------------------------------
        Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
        Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
        Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
        AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled Check Consistency

        PDs for VD 0 :
        ============
        ------------------------------------------------------------------------------
        EID:Slt DID State DG       Size Intf Med SED PI SeSz Model       Sp Type
        ------------------------------------------------------------------------------
        10:0     16 Onln   0 558.406 GB SAS  HDD N   N  512B ST3600057SS U  -
        ------------------------------------------------------------------------------
        EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
        DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
        UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
        Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
        SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
        UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
        CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded

        VD0 Properties :
        ==============
        Strip Size = 256 KB
        Number of Blocks = 1171062784
        Span Depth = 1
        Number of Drives Per Span = 1
        Write Cache(initial setting) = WriteBack
        Disk Cache Policy = Disk's Default
        Encryption = None
        Data Protection = None
        Active Operations = None
        Exposed to OS = Yes
        Creation Date = 22-06-2017
        Creation Time = 02:46:01 AM
        Host Access Policy = Shared
        Peer Has Access = YES
        VD GUID = 4c5349202020202000105b0000107b9269897c46660b80f9
        Emulation type = default
        Is LD Ready for OS Requests = Yes
        SCSI NAA Id = 600605b008929d3020ddebe9760e1fec
  3. Dell H730P

     There are some Dell H730Ps on eBay at a very, very nice price. These are based on the LSI 3108, have 2GB of cache and a built-in BBU. So, are they compatible with non-Dell motherboards? Crossflashing to genuine LSI firmware seems impossible for these cards, but I'm OK with the Dell firmware. Has anyone had any experience with these adapters?
  4. Just a suggestion for an extra thing to look into when next benchmarking an SMR drive. I know everyone says "don't use SMR in RAID" because of bad rebuild times, such as the ~57 hrs vs ~20 hrs figures from the Seagate 8TB Archive drive review. But what if you built an SMR-based RAID array and kept PMR drives on hand for use as rebuild drives? With no technical knowledge here, this would seem to fix a few issues (and may cause new ones?).
     Pros: cheaper initial RAID through the use of SMR drives??; short rebuild times, in line with PMR drives.
     Cons: ?? Potential for decreased future RAID performance due to the mixed drive composition??
     Thus I think it'd be great if we could look at testing this in the next SMR drive review. Say an initial RAID 5 x 4 build: whack a bunch of data on it, test performance, yank a drive and rebuild with SMR / PMR, retest performance and compare. Also compare to RAID 5 with PMR drives and a PMR rebuild (the standard setup). The potential of lowering entry cost could mean a bigger RAID for many people without blowing the budget, while still being protected against long rebuild times (rough rebuild-time math below). Thoughts? (Happy to be wrong... I'd just like it to be proven rather than taken on faith.)
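     Rough rebuild-time math, for context on where figures like those come from: rebuild time is roughly drive capacity divided by the sustained write rate the replacement drive can hold during the rebuild. The rates below are illustrative assumptions, not measurements from the review:

         # rebuild_time.py -- crude rebuild-time estimate: capacity / sustained write rate
         def rebuild_hours(capacity_tb: float, sustained_mb_s: float) -> float:
             return capacity_tb * 1e12 / (sustained_mb_s * 1e6) / 3600

         CAPACITY_TB = 8.0
         for label, rate in (("PMR replacement, ~110 MB/s sustained", 110),
                             ("SMR replacement, ~40 MB/s sustained", 40)):
             print(f"{label}: ~{rebuild_hours(CAPACITY_TB, rate):.0f} h")

     With those assumed rates, an 8TB rebuild lands near ~20 h onto a PMR drive and ~56 h onto an SMR drive, roughly the spread the review reported; the gap comes from how far the replacement drive's sustained write rate collapses under a long rebuild.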
  5. Hello all! Fairly new to things here and still trying to get my footing. When I started at my workplace, the previous admin had set up 3 HP ProLiant DL360 Gen9 ESXi 6.0 servers. At the time budget was an issue, so they put in 8 x 300GB 10K SAS hard drives. VMs were set up on the three servers, with one dedicated to Veeam Backup. Now space has become an issue and we have purchased 8 x 1TB 7.2K SAS hard drives to swap in. I am having difficulty finding information online on the procedure to swap out all the drives. I believe the Veeam setup information is stored on the 100GB RAM and that only the VM backups are stored on the drives. I want to do this with minimal downtime and without losing any data. Thanks in advance!
  6. Hi, I am working on a server and wanted to know which RAID level I should choose for my 5 SAS SSDs. Is RAID 5 sufficient for SSDs, or should I go RAID 6? Does RAID 10 make any difference with SSDs, or is it only worthwhile for 10K-15K spinning drives? (Rough trade-off math below.) Thanks
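     A rough way to frame the trade-off is the classic write-penalty factors (RAID 5 = 4 back-end I/Os per random write, RAID 6 = 6, RAID 10 = 2) against usable capacity and failures survived. The per-SSD IOPS figure below is an assumption for illustration only:

         # raid_tradeoffs.py -- write-IOPS vs capacity vs redundancy for 5 SSDs
         DRIVE_WRITE_IOPS = 30_000              # assumed random-write IOPS per SSD
         layouts = [
             # (level, drives used, write penalty, usable drives, failures survived)
             ("RAID 5",  5, 4, 4, 1),
             ("RAID 6",  5, 6, 3, 2),
             ("RAID 10", 4, 2, 2, 1),           # needs an even count, so one SSD sits out
         ]
         for level, n, penalty, usable, failures in layouts:
             iops = n * DRIVE_WRITE_IOPS / penalty
             print(f"{level}: ~{iops:,.0f} random-write IOPS, "
                   f"{usable} drives usable, survives {failures} failure(s)")

     On SSDs the write penalty matters less than on 10K-15K spinners because the drives have IOPS to spare, but RAID 6 still costs an extra drive of capacity and extra write amplification; whether that's worth the second-failure protection depends on the workload and rebuild window.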
  7. This project is actually two years old, but despite the aging SATA standard, it's still current as an idea. I invite you to my workshop for a short film in the "Do It Yourself" series.
  8. Hello all, I hope I can ask a question about my 3ware 9550SXU-12 SATA controller. Let me just say that after purchasing it I quickly realized I should have looked for a newer card. Here's my setup: Dell PowerEdge 2800, 9550SXU-12 SATA controller with 9 drives connected (3 are 3TB drives, the rest are 1TB). Since there is data already on the drives I had to set them up as JBOD, because 'Single Disk' mode will wipe the drives. (I know JBOD sucks; I will be replacing my drives over time and turning them into 'Single Disks' instead.) I knew that with the 3TB drives I would have to turn on Auto-Carve to use them, but that doesn't seem to be working, as I still see them as 746GB. The main issue is with the three 3TB drives. One of the three is recognized in Windows just fine. The other two are seen as RAW and are not accessible at all. (That's why I think Auto-Carve isn't working, because I should have 4 drives instead of 2.) The drives are as follows: Working: Toshiba HDWD130 (7200 RPM, SATA III). Not working: WDC WD30EZRX-00DC0B0 and WD30EZRX-00SPEB0 (both 5400 RPM, SATA III). My question is: would the drives' RPM be an issue that might cause something like this? Other than the manufacturer, that's the only difference between them. Any help on this would be appreciated immensely. Russ K
  9. Sorry this post was added by mistake and I don't know how to delete it.
  10. Hi all, I recently bought a used IBM x3650 M4 for testing and development. This server ships with a built-in RAID card, the ServeRAID M5110e. I'm using it in JBOD mode on all disks (2 x HDDs and 6 x SSDs) and have configured software RAID on Ubuntu using mdadm. I would like to know from experts like you whether that controller could be a bottleneck and whether I should buy a dedicated HBA card. In other words, how does that controller affect performance compared with a dedicated 6Gb/s SAS/SATA HBA? Another question: would a 12Gb/s HBA improve performance even with 6Gb/s SAS/SATA III disks? (Rough bandwidth math below.) Thank you very much. Pietro
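     A quick bandwidth sanity check shows whether the controller link is even in play. The per-drive figures below are assumptions for illustration, and the host-link rows assume PCIe 2.0/3.0 x8; check what the M5110e and any replacement HBA actually negotiate:

         # hba_bottleneck.py -- aggregate drive bandwidth vs controller/bus ceilings
         SSD_MB_S, SSDS = 500, 6            # assumed sequential MB/s per SATA SSD
         HDD_MB_S, HDDS = 180, 2            # assumed sequential MB/s per HDD
         drives_total = SSD_MB_S * SSDS + HDD_MB_S * HDDS

         sas6_ports = 8 * 600               # 6Gb/s SAS ~= 600 MB/s per lane after encoding
         pcie2_x8   = 8 * 500               # ~usable MB/s per PCIe 2.0 lane
         pcie3_x8   = 8 * 985               # ~usable MB/s per PCIe 3.0 lane

         print(f"Aggregate drive throughput: ~{drives_total} MB/s")
         print(f"8 x 6Gb/s SAS ports:        ~{sas6_ports} MB/s")
         print(f"PCIe 2.0 x8 host link:      ~{pcie2_x8} MB/s")
         print(f"PCIe 3.0 x8 host link:      ~{pcie3_x8} MB/s")

     If the aggregate drive number sits comfortably below every ceiling, a 12Gb/s HBA won't add sequential bandwidth with 6Gb/s drives; the more realistic wins from a plain HBA are lower latency, no RAID firmware in the path, and cleaner passthrough for mdadm.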
  11. HBA's CPU usage

     Hi guys, yet another lost soul in the pits of RAID. I've been looking for precise numbers rather than notions on the internet, but it's hopeless, so here's my case.
     Environment: 5 PCs, all running Windows and all on a 10GbE network; 1 for backing up the projected RAID server, and the other 4 as workstations, of which only up to 2 will be accessing the server at a time. However, traffic is going to be around 1.5-3 TB per day, and one day's work can't pile onto the next or I'll be in trouble.
     Hardware: the HBAs are already bought and in shipping now, 2 x LSI 9300-8i. Those are going to live in an LGA 2011-3 board, one hosting a RAID 10 pool and the other a RAID 6 pool, both pools made of 8 x WD 8TB Reds each. Note that the server will not run any VMs or any other app for that matter, other than serving those workstations, and fast. The file sizes range from hundreds of KB up to many files of 70-90 GB each, so it's all over the place. Also note that since we own a Windows Server 2012 license, we'll be using that. Given the amount of data going through, no SSD caching will be implemented; hot-data tracking is completely useless in this case.
     And now my actual question: since those 2 HBAs are going to be linking the HDDs more or less directly to the CPU, and there are going to be 2 RAID pools of 8 HDDs each (1 RAID 6 and 1 RAID 10), all the parity calculations are going to happen on the CPU, which has nothing else to do. Riddle me this: HOW MUCH, IN NUMBERS, is this type of setup going to use the CPU, say for RAID 6, plus dealing with 2 aggregated 10GbE links? All I can find on the internet is "a ROC will offload the parity calculations from the server CPU", and nowhere does it say how much that is in numbers. Not one real-world example or any theoretical system. Please note that I can't sell the HBAs and get ROCs; this is not my choice, but I'd like to complete this project the way it's going. Basically my question is: will a 5820 do, or will I have to go E5-2699 v4, so to speak? As far as RAM is concerned, I know this is Windows, so depending on the CPU chosen, probably as much memory as it supports will be installed. The CPU is not going to be doing anything else, so if this setup will use 12 or 24 or 72% of this or that CPU, one can have an accurate prediction of how things will go; but installing a CPU and seeing constant 95%+ use while writing files isn't optimal. I'm sorry, but nowhere am I seeing something like "I have Windows Server 2012 running on [this] server, and my HBA-connected 8-drive RAID 6 is using XX% of this or that CPU"; no numbers to even begin to figure this out. (A rough order-of-magnitude estimate is sketched below.)
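     For an order-of-magnitude estimate you can divide the target write throughput by the rate at which one core can compute RAID 6 P+Q parity. The per-core rate below is an assumption (Linux md prints its measured figure at boot; Windows Storage Spaces does not report one), so treat this as a sketch rather than a benchmark:

         # raid6_cpu_estimate.py -- back-of-envelope CPU share for software RAID 6 parity
         PARITY_GB_S_PER_CORE = 8.0         # assumed per-core P+Q syndrome rate (SSE/AVX class)
         TARGET_WRITE_GB_S    = 2 * 1.25    # two saturated 10GbE links of incoming writes
         CORES                = 6           # e.g. an i7-5820K-class part

         core_share = TARGET_WRITE_GB_S / PARITY_GB_S_PER_CORE
         print(f"Parity math alone: ~{core_share:.2f} of one core "
               f"(~{100 * core_share / CORES:.0f}% of a {CORES}-core CPU)")

     Under those assumptions the parity math itself is a fraction of one core even at full 2 x 10GbE line rate; in practice the network stack, SMB, interrupts, and memory copies cost considerably more CPU than the XOR/Galois-field math, so they, not parity, are what to size the CPU for.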
  12. Speccing out a new server for a client who has requested 2.5" NVMe drives from Intel. I'm struggling to understand how fault tolerance is achieved without a traditional RAID controller. I get the impression that I'm missing something fundamental about this technology. Can anyone shed some light?
  13. Hi all, I work for a small (but quickly growing) school district and we have some aging hardware that is in need of some love. From a processing and network viewpoint these servers should still have plenty of life left in them (for us at least). Rather than buying all-new servers, the thought was to put in some SSDs and RAM to breathe some new life (and performance!) into them. However, after doing some research it looks like the RAID controllers currently installed only take up to ~12GB SSDs (~36GB after a firmware update), so that brings a few questions to mind:
     1) Is this size limitation only if using SSDs as cache drives? Would they recognize modern SSDs as 'normal' hard drives with no such size restriction?
     2) Is TRIM still an issue with SSDs, or do modern controllers pass TRIM commands? Or does it depend on the specific hardware in use?
     3) If I need to replace the controller, what would be a good make/model? I have only ever used onboard Intel RAID, or whatever RAID card comes with a server... I am a little green in this area. They would need to control up to 8 physical drives (2 arrays) each, and I believe the current controller is in a PCIe 2.0 x8 slot on the motherboard (will verify tomorrow).
     4) Would we be able to get away with using relatively cheap consumer-grade drives (like the Samsung 850 Pro) instead of straight-up SAS HDDs or SSDs? Our write load is not very high, mostly read operations on databases and running several lightly used VMs on each box.
     5) I have typically used RAID 6 (or equivalent RAID-Z2 or RAID 5+1) in servers up to this point so we can take up to 2 drive failures. However, in doing a little research, everyone seems to think that RAID 5 is perfectly acceptable when using SSDs (Intel's website specifically suggests NOT using SSDs in RAID 6 and to use RAID 1 or 5 instead). Is this generally true? Or should I still be looking at a RAID 6 setup for redundancy?
     6) My first thought is to make the system drive on each box a RAID 1 with 2 SSDs for performance and redundancy... but while that makes sense on a desktop computer, would that affect anything other than the boot time on a server? These are all on battery backups, so they don't shut down often, and boot time is really not a priority. Should we save the money and buy HDDs for the boot drive?
     Other potentially important info: there are basically 3 servers I am looking to upgrade. Server 1 is for file shares and will just have a bunch of ~1.5-2TB HDDs (the server takes 2.5" drives) for the data drive. Performance is not such a huge issue here; the big concern is bulk storage and redundancy. SSD use here would only be for the OS drives (RAID 1) if it would offer any real-world benefit. Server 2 is going to be a Hyper-V box (nothing against VMware... we just have more experience using Hyper-V and are less likely to break it, lol). This will hold the VMs with the databases on them, and I would like to put in all SSDs. If we can use high-end consumer SSDs then I would like to put in 4-6 drives in a RAID 5 or 6. If we have to use SAS drives then I might just buy 2 larger (512GB) ones and put them in RAID 1. Server 3 is going to be another Hyper-V box for our more pedestrian VMs (print servers, DCs, application servers, controllers, etc.). The first thought is to just buy new HDDs and be done with it... but if we can use something like the 850 Pro SSDs then I would like to make Servers 2 & 3 identical. Depending on when this project is complete, these servers will either be running Server 2012 R2 or 2016.
     If you need more specifics (make, model, etc.) I can look that up when I am in the district tomorrow. These IBM servers all take smaller 2.5" drives instead of normal HDDs. I don't have a specific budget yet, but we are probably looking at $5K or less (preferably much less if I want the district to agree to it, lol) in total upgrades to these boxes. That includes drives, controllers, ~100GB of RAM, etc. When I am done I am hoping to consolidate 14 physical servers strewn about the district into 5-6 boxes total. Should be a fun project. Thanks for your time everybody!
  14. The LSI MegaRAID 9361-8i SAS3 RAID controller is a 12Gb/s storage adapter featuring a PCIe 3.0 host interface, 1GB of DDR3 cache memory, a 1.2GHz dual-core PowerPC 476 12Gb/s ROC, and eight 12Gb/s SATA+SAS ports. With twice the maximum data transfer rate of 6Gb/s SAS solutions, the 9361-8i delivers enough bandwidth to fully saturate the PCI Express Gen3 bus. Along with superior data transfer rates, the 9361-8i offers enterprise-class data protection and security and supports CacheVault flash cache protection. LSI's first-to-market 12Gb/s SAS solutions, such as the 9361-8i, are geared for the performance and security demands of the next generation of enterprise data storage. LSI MegaRAID SAS3 9361-8i Review
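     The "saturate the PCIe Gen3 bus" claim is easy to sanity-check with approximate per-lane rates (the figures below are rough usable numbers, not from the review):

         # pcie_vs_sas.py -- can eight 12Gb/s SAS ports fill a PCIe 3.0 x8 slot?
         PCIE3_MB_S_PER_LANE = 985          # ~usable MB/s per PCIe 3.0 lane
         SAS12_MB_S_PER_PORT = 1200         # 12Gb/s SAS ~= 1200 MB/s per lane
         host_link = 8 * PCIE3_MB_S_PER_LANE
         back_end  = 8 * SAS12_MB_S_PER_PORT
         print(f"PCIe 3.0 x8 host link:  ~{host_link} MB/s")
         print(f"8 x 12Gb/s SAS ports:   ~{back_end} MB/s")

     The back-end ports offer more raw bandwidth (~9.6 GB/s) than the x8 host link (~7.9 GB/s), which is why a fully populated 12Gb/s controller can saturate its PCIe Gen3 slot where a 6Gb/s design (~4.8 GB/s of ports) cannot.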
  15. Hey folks! I'd like to kill two birds with one stone: upgrade my OS from Windows 8.1 to Windows 10 (with a clean install) and "freshen" my SSDs at the same time. I have 4 x 256GB Samsung 840 Pro SSDs in hardware RAID 5 (LSI 9750-8i), and it was quite fast and responsive when I first installed my OS, but now it's kinda... meh... As we all know, TRIM isn't supported in hardware RAID and GC isn't as efficient. Supposedly Samsung's GC runs when the system is idle, and people have said that for it to work you need to have the system running but logged out (as opposed to just locked), but I have my system on 24/7 and only locked when away... that's a lot of degradation in performance. Anyway, I'd like to know if anyone with experience with SSDs and hardware RAID recommends any particular action to perform prior to re-installing the OS, other than a straight-up format. Something to return each individual SSD to its original "new" state, or as close to it as possible. Thank yoo! E71
  16. The prices of older SATA SSDs have really come down now. I have a RAID 1 of 2 x 3TB HDDs as scratch storage for Steam games, MP3s, ripped movies, etc. I am not even getting close to 1TB - lots of games installed that I haven't played for a couple of years, so I could go even lower, but 1TB is a convenient size where I won't have to worry about what to keep. I am really tempted to get 2 x 'cheap' 500GB drives and RAID 0 them. Currently two 480/500/512GB drives are actually cheaper than a single 1TB one, and you get better performance (and way better than HDDs). I'm not too worried about losing data, as I can re-download most of it and the rest is backed up in 3 other places. Altogether it seems like a no-brainer to go ahead and do it. Any downsides to this? Obviously with M.2 SSDs already available this is not very future-proof, but to take advantage of M.2 I would need to change the motherboard and CPU etc., so it's not really feasible.
  17. MB BIOS update - now PC won't boot with Areca 1260 plugged in. I flashed my Gigabyte X99-UD4 motherboard BIOS a couple of days ago and since then my system won't boot with the Areca 1260 plugged in. It won't display any output to the monitor and is non-responsive to any keystroke. I've tried: rolling back to the original BIOS (F6); upgrading to the latest BIOS (F12); everything in between (F6-F12); clearing the CMOS; swapping the RAID card to the first PCIe slot (in case it had issues in an x4 slot); removing the BBU. I'm looking for another stick of RAM to test with, and another system, but am starting to get desperate (16TB offline until I can fix this). Any help would be much appreciated. System: Gigabyte X99-UD4 rev1 (current BIOS F11), Win 8.1 Pro (installed on a Revo 256GB PCIe SSD), Areca 1260 with 6 x 4TB in RAID 6.
  18. Hi all, thanks for this great forum. I have a pretty common scenario - 14+ years of family data with a spotty backup/sharing strategy, looking to finally do it right. Here's what we have to work with; I'll give as much detail as I can just in case it is useful.
     'Client' hardware:
     * MacBook Pro (my business laptop), 512GB SSD
     * MacBook Air (new, wife/kids/homeschool), 256GB SSD
     * Various iOS devices
     'Server':
     * Mac Mini (media server in the utility room), 256GB SSD (new, replaces an old Mini) - used to stream video to Apple TV and other iOS devices, music to various AirPlay targets, and as an Internet-accessible family web server
     Storage:
     * USB2 1.5TB HDD
     * USB2 1TB HDD
     * USB3 2x4TB RAID1
     The most valuable data we have is digital photo/video from SLRs and iOS devices going back 14 years. What I currently have is:
     * MacBook Pro backup via Time Machine to the USB3 2x4TB RAID1
     * Current-year photo/video on the MacBook Pro system drive (backed up only by virtue of Time Machine)
     * Photos/videos up to the current year stored solely on the USB3 2x4TB RAID1
     * Old Mac Mini backup via Time Machine to the USB2 1TB HDD
     Issues include:
     * No offsite backup
     * No reliable way to access the photo/video archive (we make family movies for the kids' birthdays that sometimes include past years). The MacBook Pro is the designated "editing station".
     * The RAID1 is both the Time Machine target and the sole storage location for a priceless photo/video archive
     To resolve these, here is what I am thinking:
     * Purchase a fast (Thunderbolt 2?) DAS for the photo/video editing station. Which one? Considering the LaCie 5big, Promise Pegasus R4/R6, and OWC ThunderBay 4. I have seen you guys recommend buying empty chassis and HGST enterprise-grade HDDs; is that still the recommendation? Which RAID level - RAID5 to balance performance and some semblance of safety, or RAID0 for maximum performance and then clone to a second identical unit as a backup? HW or SW RAID? What capacity? I also do not quite understand yet what exactly is backed up from this and to where - just the raw source files?
     * Maybe centralize the USB3 2x4TB RAID1 on the Mini and use it as a Time Machine backup of the Mini, MacBook Pro, and MacBook Air, as well as a central place to put business/tax/legal documents. Maybe separate partitions?
     * Use the 1TB or 1.5TB USB2 as a clone of the boot drive on the MacBook Pro
     * Purchase a cheap ~$30 USB3 HDD dock and 2 or 3 4TB-or-so HDDs to connect to the Mini and use as a rotating offsite backup. What will go on this, from where, and can it be automated somehow? I think this should be the business/tax/legal documents, the photo/video library, and ??
     I know these solutions potentially have a significant cost, but I feel like I have a 'debt' of years of putting this off, so I want to pay the debt and move forward with something we'll feel good about. Thanks in advance for your help guys! Patrick
  19. [Apologies if this seems misplaced; I don't see a forum on RAID configuration.] I'm configuring a RAID on SSDs. It happens to be 3 drives in a RAID 5, but this is a fairly generic question. I had the idea that I could reduce stripe read-modify-write operations and write amplification by using a segment size of 4k (which equates to a stripe size of 8k, in my case). Then I build the filesystem with a block size that matches the stripe size. The only downside I can see is the overhead of using such a small stripe size, if the controller is too dumb to combine a sequence of 4k reads into fewer, larger reads. The reason I care about the performance of small writes is that this filesystem will be used for software builds, among other things, which involves frequently creating large numbers of small and medium-sized files. From what I can tell this isn't a very common practice, but I suspect the tendency towards large stripe sizes is a legacy of mechanical disk drives and simple controllers. My RAID "controller" is Linux software RAID (mdadm). Any thoughts? (Rough alignment math below.)
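     The arithmetic behind that layout, as a worked sketch. It assumes the 3-drive RAID 5 with a 4 KiB chunk ("segment") per drive and a 4 KiB filesystem block; the stride/stripe-width values at the end are what mkfs.ext4's -E option would take if ext4 is the filesystem (that part is an assumption, the post doesn't say):

         # stripe_math.py -- chunk/stripe/stride arithmetic for mdadm RAID 5
         DRIVES       = 3
         CHUNK_KIB    = 4             # mdadm chunk ("segment") size per drive
         FS_BLOCK_KIB = 4             # filesystem block size

         data_drives  = DRIVES - 1                  # RAID 5: one drive's worth of parity
         stripe_kib   = CHUNK_KIB * data_drives     # full data stripe
         stride       = CHUNK_KIB // FS_BLOCK_KIB   # fs blocks per chunk
         stripe_width = stride * data_drives        # fs blocks per full data stripe

         print(f"full data stripe: {stripe_kib} KiB")
         print(f"mkfs.ext4 -E stride={stride},stripe-width={stripe_width}")
         # Any write that covers a full data stripe can compute parity without first
         # reading old data and parity; anything smaller pays the read-modify-write.

     The trade-off is exactly the one mentioned above: with a 4 KiB chunk, a large sequential read or write is chopped into 4 KiB per-drive requests, so throughput leans heavily on the md layer and drives merging those requests back together.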
  20. I have an LSI MegaRAID SAS 9260-16i RAID controller. Somehow it has become corrupted and now the machine won't load the card on startup. The following message appears: "On-board expander FW or mfg image is corrupted. Flash expander FW and mfg image using recovery tools." This is before POST has completed, so I can't boot the machine, load the BIOS, or load the WebBIOS in order to try and flash the firmware. It's running the latest version of the firmware (2.130.403-3066, package build 12.15.0-0189). I've tried booting the machine with a good RAID card installed and the corrupt one in a second PCIe slot, but it still fails to load the corrupt card. Any help appreciated!
  21. We're setting up a system that needs to be capable of writing 2GB/s to disk. We have 9 SSDs: one with the OS and programs installed, and then an 8 x Samsung 840 Pro SSD RAID 0 array. The RAID is hooked up to an Adaptec 8805 RAID controller that is supposed to be capable of 12Gb/s per port. I've been struggling to get this system to write faster than about 1850MB/s, which is shy of the desired 2GB/s. Any ideas on what I could try to optimize my write speeds? (Rough ceiling math below.) Thanks
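     A rough ceiling check for that target. The per-drive write figure is an assumption for illustration, and the host-link line assumes the card negotiates PCIe 3.0 x8; the point is just to see which limit 1850 MB/s is closest to:

         # write_target.py -- is 2 GB/s plausible for 8 SATA SSDs in RAID 0?
         SSD_WRITE_MB_S = 500               # assumed sustained sequential write per 840 Pro
         DRIVES         = 8
         PCIE3_X8_MB_S  = 8 * 985           # assumed PCIe 3.0 x8 host link

         raw_array = DRIVES * SSD_WRITE_MB_S
         target    = 2000
         print(f"Raw array write ceiling: ~{raw_array} MB/s")
         print(f"Host link ceiling:       ~{PCIE3_X8_MB_S} MB/s")
         print(f"Target: {target} MB/s ({target / raw_array:.0%} of the raw array ceiling)")

     Since 2 GB/s is only about half of what eight such SSDs can do in aggregate, the drives and the slot are unlikely to be the wall; controller cache policy (write-back vs write-through), strip size, benchmark queue depth and block size, and partition alignment are the usual places the missing ~150 MB/s hides.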
  22. Hi, I recently bought an Alienware Area-51 desktop (or is it Aurora?) from the Dell Outlet. It had everything I wanted except it is lacking in the HDD department, with a single Seagate 1 TB 7200 RPM disk. I figured it would be easy to upgrade to 4 x 3 TB Seagates, which I also bought but haven't installed yet. I'm a bit short on time and would like to install them the fastest and easiest way possible. I tried putting in one of the 3 TB disks and changing the hard drive mode to RAID in the BIOS boot screen, but the computer won't boot. My idea was to mirror the 1 TB drive onto one of the 3 TB disks, then remove the 1 TB and add another of the 3 TB drives so the mirror can grow to the full 3 TB, and then finally add the other two. I would really like to skip making a backup of the boot drive and just copy it over... Any tips? Something I can download or buy (cheap) to do this? Thanks, Xair
  23. LSI MegaRAID - image drives from a failed RAID. I had a power failure and my LSI MegaRAID 3-disk SAS RAID 0 failed. My attempts to recover the RAID have also failed. I plan to rebuild the array and go with RAID 5, adding another SAS disk. However, before I wipe the drives I would like to image each of them separately. I tried to boot up using a Linux live CD: I can boot, but I can't see my drives. I tried just one drive plugged in while booting, but the drive can't be seen, so I can't run the imaging command. I'm assuming that since the MegaRAID SAS controller says the virtual drive is bad, it will never mount. I tried to find a SAS-to-USB cable online so I could just plug in each drive and image it, but I can't find such a product. I thought maybe I could use the MegaRAID controller with one drive plugged in and set it up as a new RAID 0 so I could get it to mount; however, it wants to run the initialize command, which would wipe the drive. Losing the RAID tables for the original RAID wouldn't be a problem, but I don't want the data to be erased. Any suggestions on how I could image each drive? Thanks.
  24. Hi all, this is my first post in this forum. I hope someone can lend me a hand, since I have run out of ideas. I've built a RAID 5 on an ASRock Z87 Extreme6 using six 4TB Western Digital Red drives connected to the six Intel SATA3 ports, with the aim of creating a 20TB RAID 5. The OS is Windows 8.1 x64. I created the RAID from the BIOS utility selecting a 64KB stripe size (I had the option of 64 or 128, but the utility recommended 64KB for RAID 5). Once in Windows I formatted the RAID volume with a 20GB partition and the write speed was really slow (10 MB/s max), even after waiting for the RAID to be completely constructed (it took several hours). After reading and looking for information, I enabled the write cache and disabled write-cache buffer flushing. I also set 'simultaneous' on in the Intel Rapid Storage Technology panel. After doing this the write speed increased to 25-30 MB/s. I have noticed that the physical sector size is 4096 bytes (usual on those 4TB disks), but the logical sector size is 512 bytes. Shouldn't those sizes match for good performance, and if so, how do I change that? I've tried deleting the partition and creating it again with different cluster sizes, and the best performance is with 64KB (the stripe size), but that's only 50-60 MB/s of actual speed copying a big MKV file from an SSD; and even then it doesn't change the capture where we see 512 bytes for the logical sector size. AS SSD Benchmark seems to say the partition is correctly aligned, and the speed results there look OK, but as I said, real speed never exceeds 58-59 MB/s on writes. I attach a capture of fdisk; I really don't know if it's aligned well or badly. ATTO Disk Benchmark results are attached too. These 6 disks were previously installed in a NAS with write speeds higher than 80 MB/s, so where is the problem here? Many thanks in advance
  25. I need a two-port SATA RAID controller for a pair of SSDs, with only one requirement: mirror two SSDs or HDDs containing the OS & applications, and not crash the OS when one of them fails. Reliability and price are the main considerations. Hot-swapping is not a requirement (overkill), and speed isn't an issue either, because the system is already more than adequate for the task. Thanks.