Search the Community

Showing results for tags 'raid'.

Found 29 results

  1. Hi, I am working on a server and wanted to know which RAID level I should choose for my 5 SAS SSDs. Is RAID 5 sufficient for SSDs, or should I use RAID 6? Does RAID 10 make any difference with SSDs, or is it only suitable for 10K-15K RPM drives? Thanks
  2. This project is actually two years old, but despite the aging SATA standard, it is still current as an idea. I invite you to my shop for a short film in the "Do it yourself" series.
  3. Hello all, I hope I can ask a question about my 3ware 9550SXU-12 SATA controller. Let me just say that after purchasing it I quickly realized I should have looked for a newer card. Here's my setup: Dell PowerEdge 2800, 9550SXU-12 SATA controller with 9 drives connected (3 are 3TB drives, the rest are 1TB). Since there is data already on the drives, I had to set them up as JBOD, because 'Single Disk' mode will wipe them. (I know JBOD is far from ideal; I will be updating my drives over time and turning them into 'Single Disks' instead.) I knew that with the 3TB drives I would have to turn on Auto-Carve to use them, but that doesn't seem to be working, as I still see them as 746GB. The main issue is with the three 3TB drives: one of the three is recognized in Windows just fine, but the other two are seen as RAW and not accessible at all. (That's why I think Auto-Carve isn't working; I should have 4 drives instead of 2.) The drives are as follows. Working: Toshiba HDWD130 (7200 RPM, SATA III). Not working: WDC WD30EZRX-00DC0B0 and WD30EZRX-00SPEB0 (both 5400 RPM, SATA III). My question: could the drives' RPM be an issue that might cause something like this? Other than the manufacturer, that's the only difference between them. Any help on this would be appreciated immensely. Russ K
  4. Sorry this post was added by mistake and I don't know how to delete it.
  5. Hi all, I recently bought a used IBM x3650 M4 for testing and development. This server ships with a built-in RAID card, the ServeRAID M5110e. I'm using it in JBOD mode on all disks (2x HDDs and 6x SSDs) and configured software RAID on Ubuntu using mdadm. I would like to know from experts like you whether that controller could be a bottleneck and whether I should buy a dedicated HBA card. In other words, how does that controller compare, performance-wise, with a dedicated 6Gb/s SAS/SATA HBA? Another question: will a 12Gb/s HBA improve performance even with 6Gb/s SAS/SATA III disks? Thank you very much, Pietro
  6. Hi guys, yet another lost soul in the pits of RAID here. I've been looking for precise numbers rather than vague notions on the internet, but it's hopeless, so here's my case. Environment: 5 PCs, all running Windows, all on a 10GbE network; 1 for backing up the projected RAID server, and the other 4 as workstations, of which at most 2 will be accessing the server at a time. However, traffic is going to be around 1.5-3 TB per day, and one day's work can't pile onto the next, or I'll be in trouble. The HBAs are already bought and in shipping now: 2x LSI 9300-8i, which will live in an LGA 2011-3 board, one serving a RAID 10 pool and the other a RAID 6 pool, each pool made of 8x WD 8TB Reds. Note that the server will not run any VMs or any applications, for that matter, other than serving those workstations, and fast. File sizes range from files of multiple hundreds of KB to many files of 70-90 GB each, so it's all over the place. Also note that since we own a Windows Server 2012 license, we'll be using that. Given the amount of data going through, no SSD caching will be implemented; hot-data tracking is completely useless in this case. Now my actual question: since those 2 HBAs will link the HDDs more or less directly to the CPU, and there will be 2 RAID pools of 8 HDDs each (1 RAID 6 and 1 RAID 10), all the parity calculations will happen on the CPU, which has nothing else to do. Riddle me this: how much, IN NUMBERS, will this type of setup load the CPU, say for RAID 6, plus dealing with 2 aggregated 10GbE links? All I can find on the internet is "a RoC will offload the parity calculations from the server CPU", and nowhere does it say HOW MUCH that is in numbers. Not one real-world example, or even a theoretical system. Please note that I can't sell the HBAs and get RoCs; this is not my choice, but I'd like to complete this project the way it's going. Basically my question is: will a 5820K do, or will I have to go to an E5-2699 v4, so to speak? As far as RAM is concerned, since this is Windows, probably as much memory as the chosen CPU supports will be installed. The CPU won't be doing anything else, so if this setup will use 12 or 24 or 72% of this or that CPU, one can make an accurate prediction of how things will go; but installing a CPU and seeing constant 95%+ use while writing files isn't optimal. I'm sorry, but nowhere am I seeing something like "I have Windows Server 2012 running on [this] server, and my HBA-connected 8-drive RAID 6 is using XX% of this or that CPU". No numbers to even begin to figure this out.
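One way to get a ballpark for the question above: Linux's md driver benchmarks its RAID 6 parity routines at boot (visible via `dmesg | grep raid6`), and dividing the sustained write rate by that parity-generation rate gives a rough per-core utilization figure. Windows Storage Spaces won't report the same number, but the arithmetic is the same. A minimal sketch; the 15 GB/s parity rate is an illustrative assumption, not a measurement:

```python
# Back-of-envelope estimate of the CPU cost of software RAID 6 parity.
# The parity-generation rate below is an assumption; on Linux,
# `dmesg | grep raid6` prints the rate measured for your actual CPU.

def raid6_cpu_fraction(write_mb_s: float, parity_gen_mb_s: float) -> float:
    """Fraction of one core spent generating P+Q parity at a given write rate."""
    return write_mb_s / parity_gen_mb_s

# Assumption: a modern AVX2-capable core generates RAID 6 parity at
# very roughly 10-20 GB/s; 15 GB/s is used here purely for illustration.
parity_rate_mb_s = 15000.0
# Worst case from the post: two saturated 10GbE links of incoming writes.
write_rate_mb_s = 2 * 1250.0

frac = raid6_cpu_fraction(write_rate_mb_s, parity_rate_mb_s)
print(f"parity alone: ~{frac:.0%} of one core")
```

Parity generation is only part of the cost (memory copies, checksums, and NIC interrupt handling add more), so treat this as a lower bound and plug in the measured rate for the candidate CPU.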
  7. Speccing out a new server for a client who has requested 2.5" NVMe drives from Intel. I'm struggling to understand how fault tolerance is achieved without a traditional RAID controller. I get the impression that I'm missing something fundamental about this technology. Can anyone shed some light?
  8. Hi all, I work for a small (but quickly growing) school district, and we have some aging hardware that is in need of some love. From a processing and network standpoint these servers should still have plenty of life left in them (for us at least). Rather than buying all-new servers, the thought was to put in some SSDs and RAM to breathe new life (and performance!) into them. However, after doing some research it looks like the RAID controllers currently installed only take up to ~12GB SSDs (~36GB after a firmware update), which brings a few questions to mind: 1) Is this size limitation only for SSDs used as cache drives? Would they recognize modern SSDs as 'normal' hard drives with no such size restriction? 2) Is TRIM still an issue with SSDs, or do modern controllers pass TRIM commands? Or does it depend on the specific hardware in use? 3) If I need to replace the controller, what would be a good make/model? I have only ever used onboard Intel RAID, or whatever RAID card comes with a server, so I am a little green in this area. They would need to control up to 8 physical drives (2 arrays) each, and I believe the current controller is in a PCIe 2.0 x8 slot on the motherboard (will verify tomorrow). 4) Would we be able to get away with relatively cheap consumer-grade drives (like the Samsung 850 Pro) instead of straight-up SAS HDDs or SSDs? Our write load is not very high: mostly read operations on databases, plus several lightly used VMs on each box. 5) I have typically used RAID 6 (or the equivalent RAIDZ2 or RAID 5+1) in servers up to this point, so we can tolerate up to 2 drive failures. However, in doing a little research, everyone seems to think that RAID 5 is perfectly acceptable when using SSDs (Intel's website specifically suggests NOT using SSDs in RAID 6 and using RAID 1 or 5 instead). Is this generally true? Or should I still be looking at a RAID 6 setup for redundancy? 6) My first thought is to make the system drive on each box a RAID 1 of 2 SSDs for performance and redundancy... but while that makes sense on a desktop computer, would it affect anything other than boot time on a server? These are all on battery backups, so they don't shut down often, and boot time is really not a priority. Should we save the money and buy HDDs for the boot drives? Other potentially important info: there are basically 3 servers I am looking to upgrade. Server 1 is for file shares and will just have a bunch of ~1.5-2TB HDDs (the server takes 2.5" drives) for the data volume. Performance is not a huge issue here; the big concerns are bulk storage and redundancy. SSDs here would only be for the OS drives (RAID 1), if that would offer any real-world benefit. Server 2 is going to be a Hyper-V box (nothing against VMware... we just have more experience with Hyper-V and are less likely to break it, lol). This will hold the VMs with the databases, and I would like to put in all SSDs. If we can use high-end consumer SSDs, I would like to put 4-6 drives in a RAID 5 or 6. If we have to use SAS drives, I might just buy 2 larger (512GB) ones and put them in RAID 1. Server 3 is going to be another Hyper-V box for our more pedestrian VMs (print servers, DCs, application servers, controllers, etc.). My first thought is to just buy new HDDs and be done with it... but if we can use something like the 850 Pro, I would like to make Servers 2 & 3 identical. Depending on when this project is completed, these servers will be running either Server 2012 R2 or 2016. If you need more specifics (make, model, etc.), I can look those up when I am in the district tomorrow. These IBM servers all take smaller 2.5" drives instead of normal HDDs. I don't have a specific budget yet, but we are probably looking at $5K or less (preferably much less, if I want the district to agree to it, lol) in total upgrades to these boxes. That includes drives, controllers, ~100GB of RAM, etc. When I am done, I am hoping to consolidate 14 physical servers strewn about the district into 5-6 boxes total. Should be a fun project. Thanks for your time, everybody!
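On the RAID 5 vs RAID 6 question above, one common way to reason about it is the chance of hitting an unrecoverable read error (URE) while rebuilding a degraded array. A minimal sketch, using spec-sheet URE rates as assumptions (real drives often do better than their spec):

```python
import math

# Rough probability of at least one unrecoverable read error (URE)
# while reading an entire array during a rebuild. The URE rates are
# spec-sheet assumptions, not measurements.

def p_ure_during_rebuild(read_tb: float, ure_per_bit: float) -> float:
    """P(>= 1 URE) over `read_tb` terabytes read, assuming independent bits."""
    bits = read_tb * 1e12 * 8
    return 1.0 - math.exp(bits * math.log1p(-ure_per_bit))

# Example: RAID 5 of six 1TB drives; a rebuild reads the 5 survivors.
consumer = 1e-14    # 1 error per 1e14 bits (a common consumer-drive spec)
enterprise = 1e-15  # a common enterprise-drive spec
print(f"consumer spec:   {p_ure_during_rebuild(5.0, consumer):.1%}")
print(f"enterprise spec: {p_ure_during_rebuild(5.0, enterprise):.1%}")
```

With small SSD arrays the read volume is low and SSD error rates are typically better than HDD specs, which is part of why RAID 5 is often considered acceptable for SSDs; RAID 6 survives a second whole-drive failure, which this model doesn't capture.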
  9. The LSI MegaRAID 9361-8i SAS3 RAID controller is a 12Gb/s storage adapter featuring a PCIe 3.0 host interface, 1GB of DDR3 cache memory, a 1.2GHz dual-core PowerPC 476 12Gb/s RoC, and eight 12Gb/s SAS+SATA ports. With twice the maximum data transfer rate of 6Gb/s SAS solutions, the 9361-8i delivers enough bandwidth to fully saturate the PCI Express Gen3 bus. Along with superior data transfer rates, the 9361-8i offers enterprise-class data protection and security, and supports CacheVault flash cache protection. LSI's first-to-market 12Gb/s SAS solutions, such as the 9361-8i, are geared for the performance and security demands of the next generation of enterprise data storage. LSI MegaRAID SAS3 9361-8i Review
  10. Hey folks! I'd like to kill two birds with one stone: upgrade my OS from Windows 8.1 to Windows 10 (with a clean install) and "freshen" my SSDs at the same time. I have 4x 256GB Samsung 840 Pro SSDs in hardware RAID 5 (LSI 9750-8i), and while it was quite fast and responsive when I first installed my OS, now it's kinda... meh... As we all know, TRIM isn't supported in hardware RAID, and GC isn't as efficient. Supposedly Samsung's GC runs when the system is idle, and people have said that for it to work you need to have the system running but logged out (as opposed to just locked), but I have my system on 24/7 and only locked when away... that's a lot of degradation in performance. Anyway, I'd like to know whether anyone with experience with SSDs and hardware RAID recommends any particular action before re-installing the OS, other than a straight-up format. Something to return each individual SSD to its original "new" state, or as close to it as possible. Thank you! E71
  11. The prices of older SATA SSDs have really come down now. I have a RAID 1 of 2x 3TB HDDs as scratch storage for Steam games, MP3s, ripped movies, etc. I am not even getting close to 1TB; lots of games installed that I haven't played in a couple of years, so I could go even lower, but 1TB is a convenient size where I won't have to worry about what to keep. I am really tempted to get 2x 'cheap' 500GB drives and RAID 0 them. Currently two 480/500/512GB drives are actually cheaper than a single 1TB one, and you get better performance (and way better than HDDs). I'm not too worried about losing data, as I can re-download most of it, and the rest is backed up in 3 other places. Altogether it seems like a no-brainer to go ahead and do it. Any downsides to this? Obviously, with M.2 SSDs already available, this is not very future-proof, but to take advantage of M.2 I would need to change the motherboard, the CPU, etc., so it's not really feasible.
  12. MB BIOS update - now PC won't boot with Areca 1260 plugged in. I flashed my Gigabyte X99-UD4 motherboard BIOS a couple of days ago, and since then my system won't boot with the Areca 1260 plugged in. It won't display any output to the monitor and is unresponsive to any keystroke. I've tried: rolling back to the original BIOS (F6); upgrading to the latest BIOS (F12), and everything in between (F6-F12); clearing the CMOS; swapping the RAID card to the first PCIe slot (in case it had issues in a x4 slot); removing the BBU. I'm looking for another stick of RAM to test with, and another system, but am starting to get desperate (16TB offline until I can fix this). Any help would be much appreciated. System: Gigabyte X99-UD4 rev1 (current BIOS F11); Win 8.1 Pro (installed on a Revo 256GB PCIe SSD); Areca 1260 with 6x 4TB in RAID 6.
  13. Hi all, thanks for this great forum. I have a pretty common scenario: 14+ years of family data with a spotty backup/sharing strategy, and I'm finally looking to do it right. Here's what we have to work with; I'll give as much detail as I can in case it is useful. 'Client' hardware: * MacBook Pro (my business laptop), 512GB SSD * MacBook Air (new; wife/kids/homeschool), 256GB SSD * various iOS devices. 'Server': * Mac Mini (media server in the utility room), 256GB SSD (new, replaces an old Mini); used to stream video to Apple TV and other iOS devices, music to various AirPlay targets, and as an Internet-accessible family web server. Storage: * USB2 1.5TB HDD * USB2 1TB HDD * USB3 2x4TB RAID 1. The most valuable data we have is digital photo/video from SLRs and iOS devices going back 14 years. What I currently have is: * MacBook Pro backed up via Time Machine to the USB3 2x4TB RAID 1 * current-year photo/video on the MacBook Pro system drive (backed up only by virtue of Time Machine) * photos/videos up to the current year stored solely on the USB3 2x4TB RAID 1 * old Mac Mini backed up via Time Machine to the USB2 1TB HDD. Issues include: * no offsite backup * no reliable way to access the photo/video archive (we make family movies for the kids' birthdays that sometimes include past years; the MacBook Pro is the designated "editing station") * the RAID 1 is both the Time Machine target and the sole storage location for a priceless photo/video archive. To resolve these, here is what I am thinking: * Purchase a fast (Thunderbolt 2?) DAS for the photo/video editing station. Which one? Considering the LaCie 5big, Promise Pegasus R4/R6, and OWC ThunderBay 4. I have seen you guys recommend buying an empty chassis and HGST enterprise-grade HDDs; is that still the recommendation? Which RAID level: RAID 5 to balance performance and some semblance of safety, or RAID 0 for maximum performance and then clone to a second identical unit as a backup? Hardware or software RAID? What capacity? I do not quite understand yet what exactly gets backed up from this, and to where; just the raw source files? * Maybe centralize the USB3 2x4TB RAID 1 on the Mini and use it as a Time Machine backup for the Mini, MacBook Pro, and MacBook Air, as well as a central place to put business/tax/legal documents. Maybe separate partitions? * Use the 1TB or 1.5TB USB2 drive as a clone of the MacBook Pro's boot drive. * Purchase a cheap ~$30 USB3 HDD dock and 2 or 3 4TB-or-so HDDs to connect to the Mini and use as a rotating offsite backup. What will go on this, from where, and can it be automated somehow? I think this should be the business/tax/legal documents, the photo/video library, and...? I know these solutions potentially have a significant cost, but I feel like I have a 'debt' from years of putting this off, so I want to pay the debt and move forward with something we'll feel good about. Thanks in advance for your help, guys! Patrick
  14. [Apologies if this seems misplaced; I don't see a forum on RAID configuration.] I'm configuring a RAID array on SSDs. It happens to be 3 drives in a RAID 5, but this is a fairly generic question. I had the idea that I could reduce stripe read-modify-write operations and write amplification by using a segment size of 4k (which equates to a stripe size of 8k, in my case). Then I build the filesystem with a block size that matches the stripe size. The only downside I can see is the overhead of using such a small stripe size, if the controller is too dumb to combine a sequence of 4k reads into fewer, larger reads. The reason I care about small-write performance is that this filesystem will be used for software builds, among other things, which involves frequently creating large numbers of small and medium-sized files. From what I can tell this isn't a very common practice, but I suspect the tendency toward large stripe sizes is a legacy of mechanical disk drives and simple controllers. My RAID "controller" is Linux software RAID (mdadm). Any thoughts?
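The intuition above can be made concrete by counting device I/Os per logical write under a simplified read-modify-write model (an assumption; real controllers and md can batch and coalesce, which this sketch ignores):

```python
# RAID 5 small-write penalty, counted in device I/Os per logical write.
# Simplified model (an assumption): a write smaller than a full stripe
# does read-modify-write on one segment (read old data + old parity,
# write new data + new parity); a full-stripe write needs no reads.

def raid5_ios_per_write(write_kib: int, segment_kib: int, data_disks: int) -> int:
    stripe_kib = segment_kib * data_disks
    if write_kib >= stripe_kib:
        return data_disks + 1   # write every data segment plus parity
    return 4                    # read-modify-write on a single segment

# 3-drive RAID 5 (2 data disks), as in the post:
print(raid5_ios_per_write(8, 4, 2))    # 8 KiB fs block on an 8 KiB stripe: 3
print(raid5_ios_per_write(8, 64, 2))   # same block on a 128 KiB stripe: 4
```

Under this model, matching the filesystem block to the stripe turns every block write into a full-stripe write (3 I/Os, no reads), which is exactly the effect the small segment size is aiming for.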
  15. I have an LSI MegaRAID SAS 9260-16i RAID controller. Somehow it's become corrupted, and now the machine won't load the card on startup. The following message appears: "On-board expander FW or mfg image is corrupted. Flash expander FW and mfg image using recovery tools." This happens before POST has completed, so I can't boot the machine, load the BIOS, or load the WebBIOS in order to try to flash the firmware. It's running the latest version of the firmware (2.130.403-3066, package build 12.15.0-0189). I've tried booting the machine with a good RAID card installed and the corrupt one in a second PCIe slot, but it still fails to load the corrupt card. Any help appreciated!
  16. We're setting up a system that needs to be capable of writing 2GB/s to disk. We have 9 SSDs: one with the OS and programs installed, and an 8x Samsung 840 Pro RAID 0 array. The array is hooked up to an Adaptec 8805 RAID controller that is supposed to be capable of 12Gb/s per port. I've been struggling to get this system to sustain write speeds better than about 1850MB/s, which is shy of the desired 2GB/s. Any ideas on what I could try to optimize my write speeds? Thanks
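For a setup like the one above, it helps to list the nominal ceilings in the write path and find the lowest one. A minimal sketch; all figures are nominal/illustrative assumptions, not measurements of this particular system:

```python
# Sanity check: which ceiling in the write path is lowest?
# All figures below are nominal/illustrative assumptions.

def bottleneck(ceilings: dict) -> tuple:
    """Return (name, MB/s) of the lowest ceiling in the I/O path."""
    name = min(ceilings, key=ceilings.get)
    return name, ceilings[name]

path_mb_s = {
    "8x 840 Pro sequential write (~520 MB/s each, assumed)": 8 * 520.0,
    "PCIe 3.0 x8 host link (~7900 MB/s nominal)": 7900.0,
    "8x SATA 6Gb/s device links (~600 MB/s each)": 8 * 600.0,
}
print(bottleneck(path_mb_s))
```

Under these assumptions the raw ceilings all sit above 4 GB/s, well clear of the observed ~1.85 GB/s, which suggests the limit is the controller's RAID processing or cache policy (write-back vs write-through, stripe size) rather than the links themselves. Note the 840 Pro is a SATA 6Gb/s drive, so the controller's 12Gb/s ports run at 6Gb/s per device anyway.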
  17. Hi, I recently bought an Alienware Area-51 desktop (or is it Aurora?) from the Dell Outlet. It had everything I wanted, except it is lacking in the HDD department, with a single Seagate 1TB 7200 RPM disk. I figured it would be easy to upgrade to four 3TB Seagates, which I also bought but haven't installed yet. I'm a bit short on time and would like to install them the fastest and easiest way possible. I tried putting in one of the 3TB disks and changing the hard drive mode to RAID in the BIOS boot screen, but the computer won't boot. My idea was to mirror the 1TB drive onto one 3TB drive, then remove the 1TB and add another of the 3TB drives to bring the mirror up to the full 3TB, and then finally add the other two. I would really like to skip making a backup of the boot drive and just copy it over... Any tips? Something I can download or buy (cheap) to do this? Thanks, Xair
  18. LSI MegaRAID - image drives from a failed RAID. I had a power failure, and my LSI MegaRAID 3-disk SAS RAID 0 failed. My attempts to recover the RAID have also failed. I plan to rebuild the array, go with RAID 5, and add another SAS disk. However, before I wipe the drives, I would like to image each of them separately. I tried booting from a Linux live CD; I can boot, but I can't see my drives. I tried booting with just one drive plugged in, but the drive still can't be seen, so I can't run the imaging command. I'm assuming that since the MegaRAID SAS controller says the virtual drive is bad, it will never mount. I tried to find a SAS-to-USB cable online so I could just plug in each drive and image it, but I can't find such a product. I thought maybe I could use the MegaRAID controller with one drive plugged in and set it up as a new RAID 0 so I could get it to mount; however, it seems to want to run the initialize command and wipe the drive. Losing the RAID tables for the original array wouldn't be a problem, but I don't want the data to be erased. Any suggestions on how I could image each drive? Thanks.
  19. Hi all, this is my first post in this forum; I hope someone can lend me a hand, since I have now run out of ideas. I've built a RAID 5 on an ASRock Z87 Extreme6 using six Western Digital Red 4TB drives connected to the six Intel SATA3 controller ports, with the aim of creating a 20TB RAID 5. The OS is Windows 8.1 x64. I created the RAID from the BIOS utility, selecting a 64KB stripe size (I had the option of 64 or 128, but the utility recommended 64KB for RAID 5). Once in Windows, I formatted the RAID volume with a 20GB partition, and write speed was really slow (10 MB/s max), even after waiting for the RAID to be completely built (it took several hours). After reading around, I enabled the write cache and disabled write-cache buffer flushing; I also turned on the 'simultaneous' option in the Intel Rapid Storage Technology panel. After doing this, the write speed increased to 25-30 MB/s. I have noticed that the physical sector size is 4096 bytes (usual on those 4TB disks) but the logical sector size is 512 bytes. Shouldn't those sizes match for good performance, and if so, how do I change that? I've tried deleting the partition and creating it again with different cluster sizes; the best performance is with 64KB (the stripe size), but that's still only 50-60 MB/s actual speed copying a big MKV file from an SSD, and it doesn't change the 512 bytes shown for the logical sector size in the capture. AS SSD Benchmark seems to say the partition is correctly aligned, and the speed results there look OK, but as I said, real speed never exceeds 58-59 MB/s in writes. I attach a capture of fdisk; I really don't know whether it's well or badly aligned. ATTO Disk Benchmark: those 6 disks were previously installed in a NAS, with a write speed higher than 80 MB/s, so where is the problem here? Many thanks in advance
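The alignment worry above can be checked numerically: a partition start (and filesystem cluster) is aligned if its byte offset is an exact multiple of the physical sector size and, ideally, of the RAID segment size. A minimal sketch, assuming the start sector is read from a tool like fdisk:

```python
# Partition/cluster alignment check. An offset is aligned if it is an
# exact multiple of the granularity (physical sector, segment, or stripe).

def aligned(offset_bytes: int, granularity_bytes: int) -> bool:
    return offset_bytes % granularity_bytes == 0

# Typical modern partition start: sector 2048 of 512-byte logical sectors.
offset = 2048 * 512                    # 1 MiB
print(aligned(offset, 4096))           # 4 KiB physical sectors -> True
print(aligned(offset, 64 * 1024))      # 64 KiB RAID segment -> True

# Old-style XP-era partitions started at sector 63 and were misaligned:
print(aligned(63 * 512, 4096))         # -> False
```

A 512-byte logical sector on a 4K-physical (512e) drive is normal and not itself a problem as long as partitions and clusters stay 4 KiB-aligned, as above.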
  20. I need a two-port SATA RAID controller for a pair of SSDs, with only one requirement: mirror two SSDs or HDDs containing the OS & applications, and not crash the OS when one fails. Reliability and price are the main considerations. Hot-swapping is not a requirement (overkill), and speed isn't an issue either, because the system is already more than adequate to the task. Thanks.
  21. Hello, my board is a Gigabyte EP45-DS3R with an Intel ICH10R. There are two disks in a RAID 1 mirror configuration (Array_0000). Two weeks ago I had a problem with Windows booting: I disconnected the second disk and installed a fresh Windows installation on the first disk. I now need to bring the second disk back into the system and sync the data. What is the proper way to do that?
  22. Can someone please confirm what it means to "initialize" a virtual drive on an LSI MegaRAID card? I've read in some places that it writes zeroes to all the drives (Fast Init only does the first/last 10 MB; Slow and Background Init do the entire drive). Assuming I'm correct: if I'm setting up a RAID 1 or RAID 10 with SSDs that I've just secure-erased, wouldn't writing zeroes to the drives (a) be total overkill, since the drives are already consistent, and (b) severely impact performance until GC has time to clean things up? I've also heard that performance may be degraded if the RAID card isn't sure the drives are consistent. Is this true? If so, would running a consistency check shortly after setting up the virtual drive solve that? Many thanks in advance!
  23. Hi. I have an Areca 1880IX-12 with a RAID 6 set containing 8x 2TB drives. The 1880 card malfunctioned (the internal expander died), and I received a new card from Areca. The new card shows two of the devices as "Missing" from the RAID set, and the same disks are labeled "Free" in the device list. I have not done anything yet, other than let the system boot. There are also two disks in a RAID 1 set (OS), and that RAID set is acting fine. Because this is RAID 6, missing 2 out of 8 disks should not be a disaster, and my first thought was to add the two missing devices as hot spares to the RAID set and then let it rebuild itself. But to be sure to do this in the right order, I have searched and read a lot of different posts on the topic, and none of them seems to match my situation exactly. I therefore thought it better to be safe than sorry and ask here whether someone can guide me back to a healthy RAID set. What are the correct commands, and in what order? Additional info is in the screen captures; if more information is needed, please don't hesitate to ask. =)
  24. Hi, I have 3x 3TB WD Red drives on an LSI 9260CV in RAID 5. Read policy: Always Read Ahead; I/O policy: Cached I/O; write policy: Always Write Back; OS: Win 2012 R2, latest firmware and drivers. I ran some benchmarks, and I can only explain the sequential results; the rest I don't get. RAID 5 is supposed to have slower writes due to the parity calculation. The LSI card has 512MB of cache, which certainly influences the results: the numbers get smaller as the cache-to-file-size ratio changes. While this is normal, there is always about 50% more throughput for random writes, and this is consistent whatever the file size. I would expect that ratio to drop as the test file grows bigger (if the cache were the reason for this strange performance). Here are results for a tiny file that fits entirely into the board's cache, so the numbers reflect PCIe transfer, not disk performance. What did I miss? m a r c
  25. 1) Which RAID level is best suited to the following specifications for complex I/O systems? ** 10 TB of disk ** protection against a single drive failure ** speed is not required ** low cost is desired ** minimum wasted space. Also, please let me know how to find the model number, quantity, RPM, seek time, and cost for the drives mentioned above. 2) Which RAID level is best suited to the following specifications for complex I/O systems? ** 2 TB of disk ** best speed ** no redundancy ** minimum wasted space. If not RAID, what about using an SSD?
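The "minimum wasted space" criteria above can be compared directly with the standard usable-capacity formulas. A minimal sketch; the drive counts in the examples are illustrative:

```python
# Usable capacity for common RAID levels with n identical drives.
def usable_tb(level: str, n: int, drive_tb: float) -> float:
    capacity = {
        "raid0": n * drive_tb,            # no redundancy
        "raid1": drive_tb,                # n-way mirror
        "raid5": (n - 1) * drive_tb,      # one drive's worth of parity
        "raid6": (n - 2) * drive_tb,      # two drives' worth of parity
        "raid10": n * drive_tb / 2,       # striped mirrors
    }
    return capacity[level]

# 1) 10 TB usable, single-failure protection, minimum waste -> RAID 5:
print(usable_tb("raid5", 6, 2.0))   # six 2TB drives -> 10.0 TB usable
# 2) 2 TB, best speed, no redundancy -> RAID 0:
print(usable_tb("raid0", 2, 1.0))   # two 1TB drives -> 2.0 TB
```

Among the levels that survive a single drive failure, RAID 5 wastes the least space (one drive out of n), which is why it fits the first set of requirements; RAID 0 (or a single large drive) fits the second.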