Kevin OBrien

Everything posted by Kevin OBrien

  1. Given this is based around Windows software RAID, have you tried loading the disks into a JBOD (4 at a time), connecting them to a dedicated Windows Server 2008 R2 or S2012 R2 platform and importing the RAID groups? It would also be wise to contact a data recovery company, since there may be an easy solution to recovering the data through a RAID group import in a secondary system.
  2. Western Digital announced the WD Sentinel DX4000 last month in an effort to bring easy-to-configure NAS storage to SMBs with fewer than 25 employees. At the core of the Sentinel is Microsoft's Storage Server 2008 R2 Essentials, an Intel 1.8 GHz dual-core Atom processor, and two or four WD RE4-GP hard drives. Western Digital sells the Sentinel pre-configured with drives in 4TB or 8TB models right now, with 6TB and 12TB models coming early next year. Given that the Sentinel arrives with drives and is ready to roll, all it takes to get going is powering the unit up and connecting it to the network, which is great for those who aren't used to dealing with storage appliances. Given the Microsoft software though, there's still plenty to tweak under the covers should SMBs with access to IT professionals care to do so. Read Full Review
  3. Intel SATA chipsets generally perform much better in single drive situations than the LSI HBAs. Those numbers you are seeing look pretty normal. You'll notice more with higher loads or additional drives.
  4. LSI 9200 and 9300 series I'd say are more common than dirt. Most haven't changed dramatically over the years either. Getting those versus Adaptec will basically cover the majority of the market.
  5. The HK4R isn't SAS, and the price you have is way off. CDW retail is $1,294: https://www.cdw.com/shop/products/Toshiba-HK4R-Series-THNSN81Q92CSE-solid-state-drive-1920-GB-SATA-6Gb/4421240.aspx Volume pricing I'm sure is much better.
  6. Data corruption can be a tricky one to pin down. Bad cables, bad drives... consumer devices don't have the best error correction capabilities. See if the EVO stays clean.
  7. If you reboot node B, does it move back to A? In Windows there was also an area to manually move it back and forth.
  8. That is correct and why we reported the 4 individual VM values in the charts. The MySQL databases only need ~300GB each, the SQL server stuff needs about 700GB each.
  9. Are you seeing any errors on the NAS itself? What is cataloging the files and what are you using to watch them? That end might be corrupted.
  10. Sadly we haven't had one of those cards come in the lab, otherwise I'd move it into something generic and see what happens.
  11. That method sounds interesting, but unless you are breaking the drive yourself, how will you bust only the SMR drives so that the rebuild activity happens only on a PMR drive?
  12. It's not too difficult, but it is a pretty detail-oriented, step-by-step process. Deleting once the data is off the drives is easy, but you need to know how to transfer the data off and confirm it's off. On the storage side, you need to know how to navigate the RAID card's pre-boot setup interface. If you just pull drives and insert new ones, the server will freak out and not do anything. You need to delete the old RAID group, add the new drives, initialize them or flag them for use in RAID, then put them into a RAID group that makes sense based on their capacity and usage profile. This means knowing when and why to pick RAID10 or RAID6, as well as block size and read/write cache settings. Then you initialize the disk group, go back into VMware, find the unformatted volume, and make that into your new datastore.
  13. Since these are all located on existing datastores, your best bet is going to be doing Storage vMotions, getting the existing stuff migrated to spare volumes. Once that is done, blow away your current datastores, then create your new ones. In no way should you consider doing a RAID expansion with this data; you don't want to blow away your backups accidentally. RAID expansions put the array into a degraded state while the disks rebuild, once for each disk in the array as it is swapped out. You are rolling the dice multiple times doing this, and even if a disk doesn't fail, there isn't any guarantee the RAID expansion capability even works without errors.
  14. Let's take a step back: is this storage presented raw to the Veeam side, or is it formatted as a datastore with the Veeam VM loaded onto it?
  15. The new Seagate IronWolf HDD is designed for all types of NAS use cases, including those that leverage multi-RAID environments, with capacities spanning up to 10TB. Seagate has had a lot of success with their purpose-built drives in the past, such as the Seagate Enterprise, Seagate NAS, and Seagate SkyHawk Surveillance HDDs. And their new line is certainly specced to follow in their footsteps. Featuring multi-tier caching technology, this uniquely named drive is built to handle the constant vibration that is inherent in 24/7 NAS spindle drives and thrives under heavy user-workload rates in a high data-traffic network. Seagate IronWolf HDD 10TB Review
  16. What cards, what devices? For a given 10GbE network card (or 16Gb FC, 40GbE, etc.), the limiting factor is the roughly 6GB/s of bandwidth through the PCIe slot itself. I've been able to push 20Gb/s through two ports combined on network cards before. The trick is making sure your network settings allow it.
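To put rough numbers on the slot-bandwidth point above, here is a minimal sketch of how PCIe throughput is estimated from lane count, per-lane rate, and line-coding overhead. The generations and lane counts shown are illustrative, not measurements from any specific card:

```python
# Rough one-direction PCIe throughput estimate (illustrative numbers only).
GEN_RATE_GTS = {2: 5.0, 3: 8.0}           # gigatransfers/s per lane
ENCODING_EFF = {2: 8 / 10, 3: 128 / 130}  # 8b/10b vs 128b/130b line coding

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for one direction."""
    raw_gbit = GEN_RATE_GTS[gen] * lanes * ENCODING_EFF[gen]
    return raw_gbit / 8  # bits -> bytes

# A typical x8 slot: Gen2 lands around 4 GB/s, Gen3 near 7.9 GB/s,
# so two 10GbE ports (~2.5 GB/s combined) fit well within the slot.
print(round(pcie_bandwidth_gbps(2, 8), 1))  # 4.0
print(round(pcie_bandwidth_gbps(3, 8), 1))  # 7.9
```

Either way, the slot has headroom for a dual-port 10GbE card; the bottleneck is more often drivers, MTU, or TCP tuning than the bus itself.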
  17. Those figures will always change, although the way your utility is reporting them is somewhat strange. They count total lifetime reads and writes in 512-byte units, so they keep climbing as long as you use the drive.
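Since those counters are in 512-byte units, converting them to a human-friendly total is a one-line calculation. A quick sketch, with a made-up counter value for illustration:

```python
# Convert a SMART-style lifetime counter (512-byte units) into TiB.
SECTOR_BYTES = 512

def lbas_to_tib(lba_count: int) -> float:
    """Total data volume in TiB for a 512-byte-unit counter."""
    return lba_count * SECTOR_BYTES / 2**40

example_lbas_written = 40_000_000_000  # hypothetical counter reading
print(f"{lbas_to_tib(example_lbas_written):.2f} TiB written")
```

Watching the delta between two readings over a day is also a handy way to gauge how much write traffic a drive is actually seeing.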
  18. The cheapest thing that came to mind is the LSI 9200 series: http://www.ebay.com/itm/like/252140860944?lpid=82&chn=ps&ul_noapp=true Dell probably has an OEM version of it, but the LSI retail one will work. Just look for the one with the "e" in the name, meaning it has the external port.
  19. As much as I love Synology, in the price range you are talking about, you should really be looking at the NetApp AFF A200. Much, much higher real world performance than the FlashStation, which we had fully decked out in SAS3 Toshiba flash. Like night and day differences.
  20. Do you have a PCIe slot open? Most of those tape units require an external SAS connection on the computer you are connecting them to for communication.
  21. Thermal output is completely linked to power draw. If you draw less power, you are going to throw off fewer BTUs of heat. No other magic but that.
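The watts-to-heat relationship above is a fixed conversion: one watt dissipated continuously is about 3.412 BTU/hr. A minimal sketch, using made-up wattages for illustration:

```python
# Heat output from power draw: 1 W sustained ~= 3.412 BTU/hr.
BTU_PER_WATT = 3.412

def btu_per_hour(watts: float) -> float:
    """Heat output in BTU/hr for a sustained power draw in watts."""
    return watts * BTU_PER_WATT

# Hypothetical example loads, not measured figures.
for label, w in [("idle drive", 5), ("active drive", 9), ("small NAS", 45)]:
    print(f"{label}: {w} W -> {btu_per_hour(w):.1f} BTU/hr")
```

This is why spec-sheet BTU ratings are usually just the power figures restated; there is no separate "thermal efficiency" knob to tune.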
  22. Crap, didn't see that. Thanks for pointing that out! Point still stands on the 2TB model.
  23. On S2D, before taking a jump in that direction, please research some of the corner-case support issues some have encountered with it: https://www.reddit.com/r/sysadmin/comments/609e98/another_catastrophic_failure_on_our_windows/ I've personally heard fantastic things about Storage Spaces Direct in regards to performance, but I think there are support issues that still need to be ironed out. You just don't encounter the same scale of problems with other solutions out of the gate. On the all-flash side, I've been incredibly impressed with the NetApp AFF-series. I'm playing around with the A200 right now and it's been awesome. The performance hit from inline compression and dedupe isn't that bad either.
  24. Both of those drives are going to be roughly in the same ballpark for noise. What platform will you be using and where will it be located? Rubber isolation to help with seek noise and just distance away from you will help the most.
  25. Fun benchmark... not bad. Performance-wise though, I've attached the Kingston results. Incredibly terrible at low block sizes, and you can see the dips during the large-block transfer sections.