Kevin OBrien

Everything posted by Kevin OBrien

  1. The new Seagate IronWolf HDD is designed for all types of NAS use cases, including those that leverage multi-RAID environments, with capacities spanning up to 10TB. Seagate has had a lot of success with their purpose-built drives in the past, such as the Seagate Enterprise, Seagate NAS, and Seagate SkyHawk Surveillance HDDs, and this new line is certainly specced to follow in their footsteps. Featuring multi-tier caching technology, this uniquely named drive is built to handle the constant vibration inherent in 24/7 NAS spindle drives and to thrive under heavy user workloads in a high-data-traffic network. Seagate IronWolf HDD 10TB Review
  2. What cards, what devices? For a given 10G network card (or 16Gb FC, 40GbE, etc.), the limiting factor is the 6GB/s of bandwidth through the PCIe slot itself. I've been able to push 20Gb/s through two ports combined on network cards before. The trick is making sure your network settings allow it.
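As a rough sanity check, you can compare slot bandwidth against combined port throughput. The numbers below are illustrative (PCIe 2.0 x8 assumed for the example), not specs for any particular card:

```python
# Rough bandwidth sanity check with illustrative numbers.
# PCIe 2.0 x8: 8 lanes * ~500 MB/s usable per lane after 8b/10b encoding.
pcie_lanes = 8
mb_per_lane = 500                                   # MB/s per PCIe 2.0 lane
slot_gbytes = pcie_lanes * mb_per_lane / 1000       # GB/s through the slot

# Two 10GbE ports pushed together: 20 Gb/s = 2.5 GB/s payload ceiling.
nic_gbits = 2 * 10
nic_gbytes = nic_gbits / 8

print(f"Slot: {slot_gbytes} GB/s, both NIC ports: {nic_gbytes} GB/s")
# The slot has headroom here, so tuning (jumbo frames, RSS queues,
# interrupt affinity) is usually what stands between you and line rate.
```

If the slot math comes out below what the ports can move combined, no amount of driver tuning will get you there.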
  3. Those figures will always change, although the way your utility reports them is a bit odd. They count total lifetime reads/writes in 512-byte units, so they will keep climbing as you use the drive.
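To turn those raw counters into something readable, multiply by the unit size the counter uses (512 bytes here, as described above; some vendors report in other units, so check your drive's documentation). A quick sketch with a made-up counter value:

```python
# Convert a SMART lifetime LBA counter to terabytes of data moved.
# 512-byte units assumed, per the post; actual unit size varies by vendor.
SECTOR_BYTES = 512

def lbas_to_tb(lba_count: int) -> float:
    """Total data volume in decimal TB for a 512-byte-unit SMART counter."""
    return lba_count * SECTOR_BYTES / 1e12

# Hypothetical counter value for illustration:
print(f"{lbas_to_tb(97_500_000_000):.2f} TB")  # 49.92 TB
```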
  4. The cheapest thing that came to mind is the LSI 9200 series: Dell probably has an OEM version of it, but the LSI retail one will work. Just look for the one with the "e" in the name, meaning it has the external port.
  5. As much as I love Synology, in the price range you are talking about, you should really be looking at the NetApp AFF A200. Much, much higher real world performance than the FlashStation, which we had fully decked out in SAS3 Toshiba flash. Like night and day differences.
  6. Do you have a PCIe slot open? Most of those tape units require an external SAS connection on the computer you are connecting them to for communication.
  7. Thermal output is completely linked to power draw. If you draw less power, you are going to throw off fewer BTUs of heat. No other magic but that.
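The relationship is a fixed conversion: 1 watt of draw works out to about 3.412 BTU/hr of heat. A quick sketch (the wattage figures are hypothetical, just to show the math):

```python
# Heat output tracks power draw directly: 1 W ~= 3.412 BTU/hr.
BTU_HR_PER_WATT = 3.412

def watts_to_btu_hr(watts: float) -> float:
    """Heat output in BTU/hr for a given sustained power draw in watts."""
    return watts * BTU_HR_PER_WATT

# Example: trimming a hypothetical server from 450 W down to 300 W.
saved = watts_to_btu_hr(450) - watts_to_btu_hr(300)
print(f"{saved:.0f} BTU/hr less heat")  # 512 BTU/hr less heat
```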
  8. Crap, didn't see that. Thanks for pointing that out! Point still stands on the 2TB model.
  9. On S2D, before taking a jump in that direction, please research some of the corner-case support issues some have encountered with it. I've personally heard fantastic things about Storage Spaces Direct in regards to performance, but I think there are support issues that still need to be ironed out. You just don't encounter the same scale of problems with other solutions out of the gate. On the all-flash side, I've been incredibly impressed with the NetApp AFF-series. I'm playing around with the A200 right now and it's been awesome. The performance hit from inline compression and dedupe isn't that bad either.
  10. Both of those drives are going to be roughly in the same ballpark for noise. What platform will you be using and where will it be located? Rubber isolation to help with seek noise and just distance away from you will help the most.
  11. Fun benchmark... not bad. Performance-wise, though, I've attached the Kingston. Incredibly terrible at low block sizes, and you can see the dips during the large-block transfer sections.
  12. We used the stock driver. In every single case in the past where a custom driver is different, it had zero impact on mixed workloads. There might have been a small uptick in straight line numbers but that was it.
  13. Out of the gate, the RAID card vs. an HBA (or the Intel chipset that we test with in our consumer platform) will chop away quite a bit of performance, sometimes 2-3x. The 12Gb/s LSI-based cards do have an HBA mode that can offer much higher performance than what you are seeing through that older RAID card. Even better is a straight-up LSI 9300-8i HBA and sticking with software RAID in the OS.
  14. That RAID card you are using will never be as fast as a dedicated HBA or an Intel chipset connecting the SSDs to a SATA bus. So much of that bursty speed you see in a review would be lopped off by the 9260-based RAID card. What long-term workloads will you be running on those SSDs? Since you did see our review, I'm curious what pushed you in the direction of this particular model. It didn't respond well at all in any of our server workload profiles.
  15. What SSDs specifically are you working with? That RAID card looks like an older 9200-series LSI model, which if you are using just hard drives or slower SSDs you may not see a difference.
  16. Either Samsung offering would be a solid bet. I'd run away from that Transcend drive. The WD offering should be stable and reliable, but not as fast as the 850 EVO. So my order would be: 1. 850 EVO, 2. WD Green, 3. 750 EVO.
  17. But this isn't a Civic-pulling-a-yacht test. Those synthetic benchmarks are used across all drives; the workload doesn't change there. This would be like comparing two cheap economy cars and realizing that one not only gets good gas mileage, but still handles really well. This is the same type of spread you see on NVMe or SAS products. Many drives offer the same or a similar level of performance to a degree, but the thing that differentiates each model is the DWPD.
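DWPD ties a drive's rated endurance to its capacity and warranty length. A quick sketch of the usual calculation (the TBW and capacity figures below are hypothetical, not from any drive discussed here):

```python
# Drive Writes Per Day from rated endurance (TBW), capacity, and warranty.
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Full-drive writes per day the endurance rating supports."""
    return tbw / (capacity_tb * warranty_years * 365)

# Hypothetical 1.92 TB drive rated for 3504 TBW over a 5-year warranty:
print(round(dwpd(3504, 1.92), 2))  # 1.0
```

Two drives with identical benchmark numbers can sit in very different classes once you run this math.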
  18. In the StorageReview Enterprise Lab we use a variety of power management tools and battery backups. Unless a data center is generating its own power or has sufficient backup generators, an interruption in the power supply is inevitable (and even in those cases, interruptions are still a possibility). For our testing environments we use Eaton gear, from the 5PX UPS for managing a subset of systems up to the BladeUPS and its ability to power several racks. The 5PX UPS has been running in our lab for over 5 years, and the time has finally come to perform some scheduled maintenance on it and the connected EBM (Extended Battery Module). In this In the Lab we go over the steps required to swap out the batteries in the UPS and replace the EBM. In The Lab: Eaton 5PX & EBM Battery Replacement
  19. Have you reached out to VMware or Adaptec on the issue to see if they can replicate?
  20. There is an element of performance as viewed through the lens of a synthetic loadgen tool versus the real world. This is the write speed chart in VMware seen as I'm moving two large VMs onto a datastore sitting on the 1.92TB Max.
  21. Performance over the lifespan of the drive isn't a great metric to go by. If it's too slow to accomplish certain tasks at the onset, knowing it will stay that slow day after day doesn't really help you out that much. We did get the 1.92TB Max in today, which we will be comparing in the same tests.
  22. On the original lithium-battery Arlo: my parents ended up getting some cameras this past winter. SE WI, really, really cold. They had a few weeks where the temp was hovering close to 0°F. Installed before Christmas, one camera is at 46% and the other is at 88%. The one at 46% had a bit of a snafu one day where it was triggered into live view mode and not turned off for an hour or so. That drained the battery fast, but it leveled off afterwards. That was like 2 months ago. So far I'm at 98% on one Pro and 88% on the other Pro. Both were charged to 100% and installed a week or so ago. The light-usage camera captures a few events a day (driveway) while the backyard one captures many long videos. Since we let our dog out or play in the backyard, it probably gets 10-20 45-second videos a day easy. Those are also at peak quality.
  23. That is a valid point, but a bigger problem is that this isn't the first time we've seen this. They seem to slump in workloads that aren't synthetic, and as you can see, the primary comparable, that Toshiba drive, didn't have the problem in either workload. We do have the larger-capacity Max drive coming in soon, which is more geared towards these workloads, to see how it stacks up.
  24. At this price point, if budget is a concern, I'd lean heavily towards a whitebox server. Go with something that supports newer components and get a bit more life from the system. Also, not having to sort out support contracts for software/firmware updates is a huge plus.
  25. I think load cycle count is one that freaks out a lot of people, but I'm not sure it has a direct correlation to failure. That said, all of my NAS devices and the units I configure are set up to disable spindown. On the SMART stuff, what you may have had is some pending bad sectors prompting the failure warning, and pushing the full scan through the tool got some of those processed into remapped bad sectors or something along those lines. The drive is designed to handle a set number, but again, if you have 100 drives with no problems and 1 with outstanding errors, even if it's "safe" I'd not risk losing data.