Kevin OBrien

Admin · 1,883 posts

Everything posted by Kevin OBrien

  1. That is correct, and it's why we reported the four individual VM values in the charts. The MySQL databases only need ~300GB each; the SQL Server databases need about 700GB each.
  2. Are you seeing any errors on the NAS itself? What is cataloging the files and what are you using to watch them? That end might be corrupted.
  3. Sadly we haven't had one of those cards come in the lab, otherwise I'd move it into something generic and see what happens.
  4. That method sounds interesting, but unless you are breaking the drives yourself, how will you bust only the SMR drives so the rebuild activity happens only on a PMR drive?
  5. It's not too difficult, but it is a pretty detail-oriented, step-by-step process. Deleting the old arrays once the data is off the drives is easy, but you need to know how to transfer the data off and confirm it's off. On the storage side, you need to know how to navigate the RAID card's pre-boot setup interface. If you just pull drives and insert new ones, the server will freak out and not do anything. You need to delete the old RAID group, add the new drives, initialize them or flag them for use in RAID, then put them into a RAID group that makes sense based on their capacity and usage profile. That means knowing when and why to pick RAID10 or RAID6, as well as the block size and read/write cache settings. Then you initialize the disk group, go back into VMware, find the unformatted volume, and turn it into your new datastore (rough sketch of that last step below).
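
     A minimal sketch of that last step, assuming the new RAID volume is already built and you are working from the ESXi shell; the device ID and datastore label are placeholders, and the vSphere client's "New Datastore" wizard does the same thing through the UI.
     ```python
     # Sketch only: turn a freshly initialized RAID volume into a VMFS datastore.
     # The device ID and label below are made-up placeholders for this example.
     import subprocess

     DEVICE = "/vmfs/devices/disks/naa.600508b1001c0000000000000000abcd"  # placeholder
     LABEL = "datastore-new"

     def sh(cmd):
         """Run a shell command on the ESXi host and return its stdout."""
         return subprocess.run(cmd, shell=True, check=True,
                               capture_output=True, text=True).stdout

     # 1. Confirm the new volume shows up as an empty device.
     print(sh("esxcli storage core device list"))

     # 2. Lay down a GPT partition table using the VMFS partition type GUID.
     _, last = sh(f'partedUtil getUsableSectors "{DEVICE}"').split()
     sh(f'partedUtil setptbl "{DEVICE}" gpt '
        f'"1 2048 {last} AA31E02A400F11DB9590000C2911D1B8 0"')

     # 3. Format the partition as VMFS6; it then appears as a datastore you can use.
     sh(f'vmkfstools -C vmfs6 -S {LABEL} "{DEVICE}:1"')
     ```
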
  6. Since these are all located on existing datastores, your best bet is going to be storage vMotions: migrate the existing stuff onto spare volumes (sketch below), blow away your current datastores once that is done, then create your new ones. In no way should you consider doing a RAID expansion with this data; you don't want to blow away your backups accidentally. A RAID expansion puts the array into a degraded state while it rebuilds for each disk that gets swapped out. You are rolling the dice multiple times doing that, and even if a disk doesn't fail there isn't any guarantee the RAID expansion capability even works without errors.
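
     For the migration piece, a minimal pyVmomi sketch of a storage vMotion is below; the vCenter address, credentials, VM name, and spare datastore name are all placeholders, and PowerCLI or the vSphere client gets you to the same place.
     ```python
     # Sketch only: storage vMotion one VM onto a spare datastore with pyVmomi.
     # Host, credentials, and object names below are placeholders.
     import ssl
     from pyVim.connect import SmartConnect, Disconnect
     from pyVim.task import WaitForTask
     from pyVmomi import vim

     def find_obj(content, vimtype, name):
         """Return the first managed object of the given type with a matching name."""
         view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
         try:
             return next(obj for obj in view.view if obj.name == name)
         finally:
             view.Destroy()

     ctx = ssl._create_unverified_context()        # lab shortcut, skips cert checks
     si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                       pwd="password", sslContext=ctx)
     content = si.RetrieveContent()

     vm = find_obj(content, vim.VirtualMachine, "backup-repo-vm")      # placeholder
     spare_ds = find_obj(content, vim.Datastore, "spare-datastore01")  # placeholder

     # Relocate only the storage; the VM keeps running during the move.
     spec = vim.vm.RelocateSpec(datastore=spare_ds)
     WaitForTask(vm.RelocateVM_Task(spec))

     Disconnect(si)
     ```
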
  7. Let's take a step back: is this storage presented raw to the Veeam side, or is it formatted as a datastore with the Veeam VM loaded onto it?
  8. The new Seagate IronWolf HDD is designed for all types of NAS use cases, including those that leverage multi-RAID environments, with capacities up to 10TB. Seagate has had a lot of success with its purpose-built drives in the past, such as the Seagate Enterprise, Seagate NAS, and Seagate SkyHawk Surveillance HDDs, and the new line is certainly specced to follow in their footsteps. Featuring multi-tier caching technology, this uniquely named drive is built to handle the constant vibration inherent in 24/7 multi-drive NAS environments and to thrive under heavy user workloads in a high data-traffic network. Seagate IronWolf HDD 10TB Review
  9. What cards, and what devices? For a given 10G network card (or 16Gb FC, 40GbE, etc.), the limiting factor is the ~6GB/s of bandwidth through the PCIe slot itself. I've been able to push 20Gb/s through two ports combined on network cards before; the trick is making sure your network settings allow it (quick math below).
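
     Quick back-of-the-envelope math for that point, using the ~6GB/s slot figure from above; the card list is just illustrative.
     ```python
     # Rough math: line rate of common cards vs. the ~6GB/s of PCIe slot bandwidth
     # mentioned above. Purely illustrative; protocol overhead is ignored.
     SLOT_GB_PER_S = 6.0

     cards_gbit = {
         "single 10GbE port": 10,
         "dual 10GbE ports": 20,
         "16Gb FC port": 16,
         "40GbE port": 40,
     }

     for name, gbit in cards_gbit.items():
         gbyte = gbit / 8                          # 8 bits per byte
         print(f"{name}: {gbyte:.2f} GB/s needed, "
               f"{SLOT_GB_PER_S - gbyte:.2f} GB/s of slot headroom left")
     ```
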
  10. Those figures will always change, although the way your utility is reporting them is somewhat strange. They count the total lifetime 512-byte sector reads and writes, so they will keep climbing for as long as you use the drive (conversion example below).
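
     For example, converting one of those lifetime counters into something readable is a single multiplication; the counter value here is made up.
     ```python
     # Example only: the counter tallies 512-byte sectors over the drive's life,
     # so one multiplication turns it into terabytes. The value below is made up.
     SECTOR_BYTES = 512
     total_lbas_written = 1_954_210_938_112   # hypothetical lifetime counter reading

     tb_written = total_lbas_written * SECTOR_BYTES / 1e12
     print(f"~{tb_written:.1f} TB written so far")
     ```
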
  11. The cheapest thing that came to mind is the LSI 9200 series: http://www.ebay.com/itm/like/252140860944?lpid=82&chn=ps&ul_noapp=true Dell probably has an OEM version of it, but the LSI retail one will work. Just look for the one with the "e" in the name, meaning it has the external port.
  12. As much as I love Synology, in the price range you are talking about, you should really be looking at the NetApp AFF A200. Much, much higher real world performance than the FlashStation, which we had fully decked out in SAS3 Toshiba flash. Like night and day differences.
  13. Do you have a PCIe slot open? Most of those tape units require an external SAS connection on the computer you are connecting them to for communication.
  14. Thermal output is completely linked to power draw. If you draw less power, you are going to throw off fewer BTUs of heat (quick conversion below). No other magic but that.
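
     The math behind it is one constant; the wattages below are example figures, not measurements of any specific system.
     ```python
     # 1 watt of draw works out to roughly 3.412 BTU/hr of heat. Example wattages only.
     BTU_HR_PER_WATT = 3.412

     for watts in (450, 300):                  # e.g. old config vs. lower-power config
         print(f"{watts} W draw -> ~{watts * BTU_HR_PER_WATT:.0f} BTU/hr of heat")
     ```
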
  15. Crap, didn't see that. Thanks for pointing that out! Point still stands on the 2TB model.
  16. On the S2D side, before taking the jump in that direction, please research some of the corner-case support issues some have encountered with it: https://www.reddit.com/r/sysadmin/comments/609e98/another_catastrophic_failure_on_our_windows/ I've personally heard fantastic things about Storage Spaces Direct in regards to performance, but I think there are support issues that still need to be ironed out. You just don't encounter the same scale of problems with other solutions out of the gate. On the all-flash side, I've been incredibly impressed with the NetApp AFF series. I'm playing around with the A200 right now and it's been awesome. The performance hit from inline compression and dedupe isn't that bad either.
  17. Both of those drives are going to be roughly in the same ballpark for noise. What platform will you be using and where will it be located? Rubber isolation to help with seek noise and just distance away from you will help the most.
  18. Fun benchmark... not bad. Performance-wise, though, I've attached the Kingston results. Incredibly terrible at low block sizes, and you can see the dips during the large-block transfer sections.
  19. We used the stock driver. In every single case in the past where a custom driver was available, it had zero impact on mixed workloads. There might have been a small uptick in straight-line numbers, but that was it.
  20. Out of the gate, the RAID card versus an HBA (or the Intel chipset we test with on our consumer platform) will chop away quite a bit of performance, sometimes 2-3x. The 12Gb/s LSI-based cards do have an HBA mode that can offer much higher performance than what you are seeing through that older RAID card. Even better is a straight-up LSI 9300-8i HBA and sticking with software RAID in the OS (sketch below).
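
     If you go the HBA route, a minimal sketch of the software-RAID side on Linux with mdadm is below; the device names and the RAID10 layout are placeholders for however your SSDs enumerate.
     ```python
     # Sketch only: software RAID in the OS behind a plain HBA, driven from Python.
     # Device names and the RAID10 layout are placeholders for this example.
     import subprocess

     DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # SSDs behind the HBA

     # Build a RAID10 md array across the SSDs.
     subprocess.run(["mdadm", "--create", "/dev/md0", "--level=10",
                     f"--raid-devices={len(DEVICES)}", *DEVICES], check=True)

     # Put a filesystem on it; the HBA just passes the disks straight through.
     subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)
     ```
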
  21. That RAID card you are using will never be as fast as a dedicated HBA or an Intel chipset connecting the SSDs to a SATA bus. Much of the bursty speed you see in a review would be lopped off by the 9260-based RAID card. What long-term workloads will you be running on those SSDs? Since you did see our review, I'm curious what pushed you toward this particular model; it didn't respond well at all in any of our server workload profiles.
  22. What SSDs specifically are you working with? That RAID card looks like an older 9200-series LSI model, so if you are using just hard drives or slower SSDs you may not see a difference.
  23. Either Samsung offering would be a solid bet. I'd run away from that Transcend drive. The WD offering should be stable and reliable, but not as fast as the 850 EVO. So my order would be: 1) 850 EVO, 2) WD Green, 3) 750 EVO.
  24. But this isn't a "Civic pulling a yacht" test. Those synthetic benchmarks are used across all drives; the workload doesn't change there. This would be like comparing two cheap economy cars and realizing that one not only gets good gas mileage but also still handles really well. It's the same type of spread you see on NVMe or SAS products: many drives offer the same or a similar level of performance to a degree, but the thing that differentiates each model is the DWPD (quick math below).
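
     Quick DWPD math to show how endurance separates otherwise similar performers; both drives in the example are hypothetical.
     ```python
     # DWPD implied by a TBW endurance rating; both drives here are hypothetical.
     def dwpd(tbw_tb, capacity_tb, warranty_years=5):
         """Drive writes per day over the warranty period."""
         return tbw_tb / (capacity_tb * warranty_years * 365)

     print(f"1TB drive rated 600 TBW:  {dwpd(600, 1.0):.2f} DWPD")
     print(f"1TB drive rated 1825 TBW: {dwpd(1825, 1.0):.2f} DWPD")
     ```
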
  25. In the StorageReview Enterprise Lab we use a variety of power management tools and battery backups. Unless a data center generates its own power or has sufficient backup generators, an interruption in the power supply is inevitable (and even in those cases, interruptions are still a possibility). For our testing environments we use Eaton gear, from the 5PX UPS for managing a subset of systems up to the BladeUPS with its ability to power several racks. The 5PX UPS has been running in our lab for over 5 years, and the time has finally come to perform some scheduled maintenance on it and the connected EBM (Extended Battery Module). In this In the Lab article we go over the steps required to swap out the batteries in the UPS and replace the EBM. In The Lab: Eaton 5PX & EBM Battery Replacement