Kevin OBrien

Admin
  • Content count: 1889
  • Joined
  • Last visited
  • Days Won: 25

Kevin OBrien last won the day on May 26

Kevin OBrien had the most liked content!

Community Reputation: 51 Excellent

About Kevin OBrien

  • Rank: StorageReview Editor
  • Birthday: 08/11/83

Profile Information

  • Location: Somewhere, USA
  • Interests: Motorcycling, camping, geocaching, breaking electronics

  1. Given this is based around Windows software RAID, have you tried loading the disks into a JBOD (4 at a time), connecting them to a dedicated Windows Server 2008 R2 or Server 2012 R2 platform, and importing the RAID groups? (There is a rough sketch of scripting the import step at the end of this list.) It would also be wise to contact a data recovery company, since there may be an easy path to recovering the data through a RAID group import on a secondary system.
  2. Intel SATA chipsets generally perform much better in single-drive situations than the LSI HBAs, so the numbers you are seeing look pretty normal. You'll notice more of a difference from the HBA under higher loads or with additional drives.
  3. The LSI 9200 and 9300 series are, I'd say, more common than dirt, and most haven't changed dramatically over the years either. Going with those versus Adaptec will basically cover the majority of the market.
  4. The HK4R isn't SAS, and the price you have is way off. CDW retail is $1,294: https://www.cdw.com/shop/products/Toshiba-HK4R-Series-THNSN81Q92CSE-solid-state-drive-1920-GB-SATA-6Gb/4421240.aspx Volume pricing, I'm sure, is much better.
  5. Data corruption can be a tricky one to pin down: bad cables, bad drives... and consumer devices don't have the best error-correction capabilities. See if the EVO stays clean.
  6. If you reboot node B, does it move back to A? In Windows there was also an area to manually move it back and forth.
  7. That is correct, and it's why we reported the 4 individual VM values in the charts. The MySQL databases only need ~300GB each, while the SQL Server workloads need about 700GB each.
  8. Are you seeing any errors on the NAS itself? What is cataloging the files and what are you using to watch them? That end might be corrupted.
  9. Sadly we haven't had one of those cards come into the lab; otherwise I'd move it into something generic and see what happens.
  10. That method sounds interesting, but unless you are breaking the drive yourself, how will you bust only the SMR drives so that the rebuild activity only happens on a PMR drive?
  11. It's not too difficult, but it is a pretty detail-oriented, step-by-step process. Deleting the old volumes once the data is off the drives is easy, but you need to know how to transfer the data off and confirm it's off. On the storage side, you need to know how to navigate the RAID card's pre-boot setup interface. If you just pull drives and insert new ones, the server will freak out and not do anything. You need to delete the old RAID group, add the new drives, initialize them or flag them for use in RAID, then put them into a RAID group that makes sense based on their capacity and usage profile. This means knowing when and why to pick RAID10 or RAID6, as well as block size and read/write cache settings. Then you initialize the disk group, go back into VMware, find the unformatted volume, and make that into your new datastore (there's a rough sketch of that last step at the end of this list).
  12. Since these are all located on existing datastores, your best bet is going to be doing Storage vMotions to get the existing data migrated to spare volumes (a rough sketch is at the end of this list). Once that is done, blow away your current datastores, then create your new ones. In no way should you consider doing a RAID expansion with this data; you don't want to blow away your backups accidentally. A RAID expansion puts the array into a degraded state during the rebuild for each disk that is swapped out. You are rolling the dice multiple times doing this, and even if a disk doesn't fail, there isn't any guarantee the RAID expansion capability even works without errors.
  13. Let's take a step back: is this storage presented raw to the Veeam side, or is it formatted as a datastore with the Veeam VM loaded onto it?
  14. What cards, what devices? For a given 10G network card (or 16Gb FC, 40GbE, etc.), the limiting factor is the roughly 6GB/s of bandwidth through the PCIe slot itself. I've been able to push 20Gb/s through two ports combined on network cards before; the trick is making sure your network settings allow it. (Rough slot-bandwidth arithmetic is at the end of this list.)
  15. Those figures will always change, although the way your utility is reporting them is somewhat strange. They count the total lifetime reads and writes in 512-byte sectors, so they will keep climbing as you use the drive (see the conversion sketch at the end of this list).
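
A rough sketch of the import step suggested in item 1, assuming the old array was built on Windows dynamic disks (software RAID), the members show up as a "foreign" disk group on the recovery box, and you are working from clones rather than the original drives. The disk number is a placeholder; check diskpart's "list disk" output first.

    import subprocess
    import tempfile
    import os

    FOREIGN_DISK = 1  # placeholder: one member of the foreign dynamic disk group

    # diskpart script: bring the disk online and import the entire
    # foreign disk group it belongs to.
    script = "\n".join([
        f"select disk {FOREIGN_DISK}",
        "online disk noerr",
        "import",
    ])

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        path = f.name

    try:
        # Must be run from an elevated (administrator) prompt.
        subprocess.run(["diskpart", "/s", path], check=True)
    finally:
        os.remove(path)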
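
For the last step in item 11, a minimal pyVmomi sketch of turning the new unformatted volume into a VMFS datastore. It assumes the host is reachable at a placeholder address, the freshly created RAID volume shows up as an unused SCSI disk, and the credentials and datastore label are stand-ins.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.example.local", user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]  # first (or only) host
        ds_sys = host.configManager.datastoreSystem

        # Disks visible to the host but not yet backing a VMFS datastore;
        # the new RAID volume should be in this list.
        disks = ds_sys.QueryAvailableDisksForVmfs()
        for d in disks:
            print(d.devicePath, d.capacity.block * d.capacity.blockSize, "bytes")

        disk = disks[0]  # pick the new volume (verify the device path first!)
        options = ds_sys.QueryVmfsDatastoreCreateOptions(disk.devicePath)
        spec = options[0].spec
        spec.vmfs.volumeName = "new-datastore"  # placeholder label
        ds = ds_sys.CreateVmfsDatastore(spec)
        print("Created datastore:", ds.name)
    finally:
        Disconnect(si)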
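
For the Storage vMotion step in item 12, a minimal pyVmomi sketch of relocating one VM's disks to a spare datastore while it stays running. The vCenter address, credentials, VM name, and datastore name are all placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()

        def find(vimtype, name):
            # Return the first inventory object of the given type with a matching name.
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            return next(obj for obj in view.view if obj.name == name)

        vm = find(vim.VirtualMachine, "backup-vm")         # placeholder VM name
        spare_ds = find(vim.Datastore, "spare-datastore")  # placeholder datastore name

        # Storage-only relocation: move the VM's disks to the spare datastore.
        spec = vim.vm.RelocateSpec(datastore=spare_ds)
        task = vm.RelocateVM_Task(spec=spec)
        print("Storage vMotion started:", task.info.key)
    finally:
        Disconnect(si)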
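
Rough slot-bandwidth arithmetic behind item 14: even a PCIe 2.0 x8 slot has usable bandwidth well above what a dual-port 10GbE card can demand. The per-lane figures are approximate, after 8b/10b (gen 2) and 128b/130b (gen 3) encoding overhead.

    # Approximate usable throughput per PCIe lane, in gigabits per second.
    LANE_GBPS = {"2.0": 4.0, "3.0": 7.9}

    def slot_gbps(gen, lanes):
        return LANE_GBPS[gen] * lanes

    nic_demand = 2 * 10  # dual-port 10GbE at line rate, in Gb/s
    for gen in ("2.0", "3.0"):
        print(f"PCIe {gen} x8: ~{slot_gbps(gen, 8):.0f} Gb/s slot vs {nic_demand} Gb/s NIC demand")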
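
For the counters in item 15: since the raw values are lifetime totals in 512-byte sectors, turning them into a readable figure is a single multiplication. The sample counter value below is made up for illustration.

    SECTOR_BYTES = 512

    def sectors_to_tib(sectors):
        # Lifetime reads or writes reported in 512-byte sectors -> tebibytes.
        return sectors * SECTOR_BYTES / 2**40

    total_lbas_written = 3_921_841_152  # hypothetical raw counter from a SMART report
    print(f"~{sectors_to_tib(total_lbas_written):.2f} TiB written so far")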