Kevin OBrien

Admin
  • Content count
    1880

Community Reputation

50 Excellent

About Kevin OBrien

  • Rank
    StorageReview Editor
  • Birthday 08/11/83

Profile Information

  • Location
    Somewhere, USA
  • Interests
    Motorcycling, camping, geocaching, breaking electronics

Recent Profile Visitors

8118 profile views
  1. That method sounds interesting, but unless you are breaking the drive yourself, how will you bust only the SMR drives so that the rebuild activity only happens on a PMR drive?
  2. It's not too difficult, but it is a pretty detail-oriented, step-by-step process. Deleting once the data is off the drives is easy, but you need to know how to transfer the data off and confirm it's off. On the storage side, you need to know how to navigate the RAID card's pre-boot setup interface. If you just pull drives and insert new ones, the server will freak out and not do anything. You need to delete the old RAID group, add the new drives, initialize them or flag them for use in RAID, then put them into a RAID group that makes sense based on their capacity and usage profile. This means knowing when and why to pick RAID10 or RAID6 (see the capacity sketch after this list), as well as block size and read/write cache settings. Then you initialize the disk group, go back into VMware, find the unformatted volume, and make that your new Datastore.
  3. Since these are all located on existing datastores, your best bet is going to be doing Storage vMotions to get the existing stuff migrated to spare volumes. Once that is done, blow away your current datastores, then create your new ones. In no way should you consider doing a RAID expansion with this data; you don't want to blow away your backups accidentally. RAID expansions put the array into a degraded state while the disks rebuild, once for each disk in the array as it is swapped out. You are rolling the dice multiple times doing this (see the rebuild-risk sketch after this list), and even if a disk doesn't fail, there isn't any guarantee the RAID expansion capability even works without errors.
  4. Let's take a step back: is this storage presented raw to the Veeam side, or is it formatted as a datastore with the Veeam VM loaded onto it?
  5. What cards, what devices? For a given 10G network card (or 16Gb FC, 40GbE, etc.), the limiting factor is the 6GB/s of bandwidth through the PCIe slot itself. I've been able to push 20Gb/s through two ports combined on network cards before (see the bandwidth math after this list). The trick is making sure your network settings allow it.
  6. Those figures will always change, although the way your utility is reporting them is somewhat strange. They count the total lifetime 512-byte reads and writes (see the sector-count conversion after this list), so they will keep changing as you use the drive.
  7. The cheapest thing that came to mind is the LSI 9200 series: http://www.ebay.com/itm/like/252140860944?lpid=82&chn=ps&ul_noapp=true Dell probably has an OEM version of it, but the LSI retail one will work. Just look for the one with the "e" in the name, meaning it has the external port.
  8. As much as I love Synology, in the price range you are talking about you should really be looking at the NetApp AFF A200. Much, much higher real-world performance than the FlashStation, which we had fully decked out in SAS3 Toshiba flash. It's a night and day difference.
  9. Do you have a PCIe slot open? Most of those tape units require an external SAS connection on the computer you are connecting them to for communication.
  10. Thermal output is directly linked to power draw: if you draw less power, you are going to throw off fewer BTUs of heat (see the watts-to-BTU conversion after this list). No other magic but that.
  11. Crap, didn't see that. Thanks for pointing that out! Point still stands on the 2TB model.
  12. On the S2D side, before taking the jump in that direction, please research some of the corner-case support issues some have encountered with it: https://www.reddit.com/r/sysadmin/comments/609e98/another_catastrophic_failure_on_our_windows/ I've personally heard fantastic things about Storage Spaces Direct in regards to performance, but I think there are support issues that still need to be ironed out. You just don't encounter the same scale of problems with other solutions out of the gate. On the all-flash side, I've been incredibly impressed with the NetApp AFF series. I'm playing around with the A200 right now and it's been awesome. The performance hit from inline compression and dedupe isn't that bad either.
  13. Both of those drives are going to be roughly in the same ballpark for noise. What platform will you be using, and where will it be located? Rubber isolation to dampen seek noise, and simply keeping the drives farther away from you, will help the most.
  14. Fun benchmark... not bad. Performance-wise, though, I've attached the Kingston results: incredibly poor at low block sizes, and you can see the dips during the large-block transfer sections.
  15. We used the stock driver. In every single case in the past where a custom driver differed, it had zero impact on mixed workloads. There might have been a small uptick in straight-line numbers, but that was it.
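
A minimal sketch of the usable-capacity math behind the RAID10-vs-RAID6 choice mentioned in post 2. The drive count and drive size are made-up example values, not figures from the original thread.

```python
# Rough usable-capacity comparison for the RAID10 vs. RAID6 decision in post 2.
# Drive count and size below are illustrative assumptions only.

def raid10_usable_tb(num_drives: int, drive_tb: float) -> float:
    # RAID10 mirrors every drive, so usable space is half the raw capacity.
    return (num_drives // 2) * drive_tb

def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    # RAID6 spends two drives' worth of capacity on parity.
    return (num_drives - 2) * drive_tb

drives, size_tb = 8, 4  # example: eight 4TB drives
print(f"RAID10 usable: {raid10_usable_tb(drives, size_tb)} TB")  # 16 TB
print(f"RAID6  usable: {raid6_usable_tb(drives, size_tb)} TB")   # 24 TB
```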
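
A back-of-the-envelope illustration of the "rolling the dice multiple times" point in post 3: every disk swap during a RAID expansion is another rebuild, and the chance of hitting at least one problem compounds. The per-rebuild failure probability here is an assumed illustrative number, not a measured rate.

```python
# Compounding risk across repeated rebuilds during a RAID expansion (post 3).
# The 2% per-rebuild failure chance is a made-up illustrative figure.

def chance_of_any_failure(per_rebuild_failure: float, rebuilds: int) -> float:
    # Probability that at least one of `rebuilds` rebuilds hits a failure.
    return 1 - (1 - per_rebuild_failure) ** rebuilds

p = 0.02  # assumed chance of something going wrong during a single rebuild
for n in (1, 4, 8):
    print(f"{n} rebuild(s): {chance_of_any_failure(p, n):.1%} chance of at least one failure")
```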
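
The unit math behind post 5, taking the ~6GB/s slot figure quoted there at face value and assuming two bonded 10GbE ports with protocol overhead ignored.

```python
# Bandwidth sanity check for post 5: do two 10GbE ports fit within the
# ~6GB/s PCIe slot budget cited in the post? (Overhead ignored.)

ports = 2
port_gbps = 10                           # line rate per port, in gigabits per second
needed_gb_per_s = ports * port_gbps / 8  # convert gigabits to gigabytes
slot_gb_per_s = 6                        # slot bandwidth figure quoted in the post

print(f"2 x 10GbE needs ~{needed_gb_per_s:.1f} GB/s of the ~{slot_gb_per_s} GB/s slot budget")
print("Ports saturate before the slot does" if needed_gb_per_s < slot_gb_per_s
      else "The slot is the bottleneck")
```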
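
How the counters described in post 6 map to actual data volume: they track lifetime 512-byte sectors, so multiplying by 512 gives bytes. The counter value below is a hypothetical example, not a reading from the user's drive.

```python
# Converting a lifetime 512-byte sector counter (post 6) into terabytes.
# The sector count is a made-up example value.

SECTOR_BYTES = 512
lifetime_sectors_written = 3_000_000_000  # hypothetical counter reading

total_bytes = lifetime_sectors_written * SECTOR_BYTES
print(f"{lifetime_sectors_written} sectors written = {total_bytes / 1e12:.2f} TB")
```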
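
The watts-to-BTU relationship behind post 10 is a straight unit conversion: roughly 3.412 BTU/hr of heat per watt drawn. The wattage values are arbitrary examples.

```python
# Heat output per watt of power draw (post 10). Example wattages only.

BTU_PER_HOUR_PER_WATT = 3.412  # 1 watt dissipated continuously = ~3.412 BTU/hr

for watts in (50, 100, 250):
    print(f"{watts:>4} W draw -> ~{watts * BTU_PER_HOUR_PER_WATT:.0f} BTU/hr of heat")
```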