Everything posted by Kevin OBrien

  1. Either Samsung offering would be a solid bet. I'd run away from that Transcend drive. The WD offering should be stable and reliable, but not as fast as the 850 EVO. So my order would be: 1. 850 EVO, 2. WD Green, 3. 750 EVO.
  2. But this isn't a civic-pulling-a-yacht test. Those synthetic benchmarks are used across all drives; the workload doesn't change there. This would be like comparing two cheap economy cars and realizing that one not only gets good gas mileage, but also still handles really well. This is the same type of spread you see on NVMe or SAS products. Many drives offer the same or similar level of performance to a degree, but the thing that differentiates each model is the DWPD.
  3. In the StorageReview Enterprise Lab we use a variety of power management tools and battery backups. Unless a data center generates its own power or has sufficient backup generators, an interruption in power supply is inevitable (and even in those cases, interruptions are still a possibility). For our testing environments we use Eaton gear, from the 5PX UPS managing a subset of systems up to the BladeUPS with its ability to power several racks. The 5PX UPS has been running in our lab for over 5 years, and the time has finally come to perform some scheduled maintenance on it and the connected EBM (Extended Battery Module). In this In the Lab we go over the steps required to swap out the batteries in the UPS and replace the EBM. In The Lab: Eaton 5PX & EBM Battery Replacement
  4. Have you reached out to VMware or Adaptec on the issue to see if they can replicate?
  5. There is an element of performance as viewed through the lens of a synthetic load-gen tool versus the real world. This is the write speed chart in VMware as I'm moving two large VMs onto a datastore sitting on the 1.92TB Max.
  6. Performance over the lifespan of the drive isn't a great metric to go by. If it's too slow to accomplish certain tasks at the onset, knowing it will stay that slow day after day doesn't really help you out that much. We did get the 1.92TB Max in today, which we will be comparing in the same tests.
  7. On the original lithium-battery Arlo: My parents ended up getting some cameras this past winter. SE WI, really, really cold. They had a few weeks where the temp was hovering close to 0F. Installed before Christmas, one camera is at 46% and the other is at 88%. The one at 46% had a bit of a snafu one day where it was triggered into live view mode and not turned off for an hour or so. That drained the battery fast, but it leveled off afterwards. That was like 2 months ago. So far I'm at 98% on one Pro and 88% on the other Pro. Both were charged to 100% and installed a week or so ago. The light-usage camera captures a few events a day (driveway) while the backyard one captures many long videos. Since we let our dog out or play in the backyard, it probably gets 10-20 45-second videos a day easy. Those are also peak quality.
  8. That is a valid point, but a bigger problem is this isn't the first time we've seen this. They seem to slump in workloads that aren't synthetic, while as you can see, the primary comparable, the Toshiba, didn't have the problem in either workload. We do have the larger-capacity Max drive coming in soon, which is more geared towards these workloads, so we can see how it stacks up.
  9. At this price point, if budget is a concern, I'd lean heavily towards a whitebox server. Go with something that supports newer components and get a bit more life from the system. Also, not having to sort out support contracts for software/firmware updates is a huge plus.
  10. I think load cycle count is one that freaks out a lot of people, but I'm not sure it has a direct correlation to failure. That said, all of my NAS devices and units I configure are set up to disable spindown. On the SMART stuff, what you may have had is some pending bad sectors prompting the failure warning, and pushing the full scan through the tool got some of them processed into remapped bad sectors, or something along those lines. The drive is designed to handle a set number, but again, if you have 100 drives with no problems and 1 with outstanding errors, even if it's "safe" I'd not risk losing data. A quick way to check those counters is sketched below.
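     A minimal sketch of checking those two SMART attributes, assuming smartctl from smartmontools is installed; /dev/sda is a placeholder for your drive:

        # Hedged sketch: pull the SMART attribute table and show the two counters
        # discussed above. smartctl must be installed; /dev/sda is hypothetical.
        import subprocess

        out = subprocess.run(
            ["smartctl", "-A", "/dev/sda"],
            capture_output=True, text=True,
        ).stdout

        for line in out.splitlines():
            # ID 5 = Reallocated_Sector_Ct, ID 197 = Current_Pending_Sector
            if "Reallocated_Sector_Ct" in line or "Current_Pending_Sector" in line:
                print(line)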
  11. I can't say I've seen anything like that. Most of the current offerings have H/W encryption built in, and you enable it with a passcode.
  12. I think part of the issue you describe is the traffic flowing through the RAID card, I'm assuming in RAID0 for that drive, versus pass-through/HBA mode. If that's the case, it's the hardware RAID holding that SSD back.
  13. While Adam is working on the game performance bit, I wanted to clarify why the 4TB data is a bit skewed. The 4TB drive has an SMR HDD inside, which treats bursts of random write activity as sequential for a short duration until its buffer fills up. So in some small areas the 4TB drive can win out, but the problem comes down to longer sustained activity, or when reading that data back again, where you take the performance hit instead. The toy model below illustrates the effect.
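     A toy Python model of that buffer behavior (my own illustration with made-up numbers, not the drive's actual firmware logic):

        # Illustrative SMR media-cache model: bursts are absorbed at full speed
        # until the cache fills, then sustained writes drop to the shingled rate.
        CACHE_GB = 20        # assumed persistent cache size
        FAST_MBPS = 180      # assumed burst speed while the cache has room
        SLOW_MBPS = 30       # assumed sustained speed once shingled rewrites kick in

        def effective_speed(written_gb: float) -> int:
            """Write speed seen at this point in one long sustained workload."""
            return FAST_MBPS if written_gb < CACHE_GB else SLOW_MBPS

        for gb in (5, 15, 25, 100):
            print(f"after {gb} GB written: ~{effective_speed(gb)} MB/s")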
  14. The new Seagate IronWolf HDD is designed for all types of NAS use cases, including those that leverage multi-RAID environments, with capacities spanning up to 10TB. Seagate has had a lot of success with their purpose-built drives in the past, such as the Seagate Enterprise, Seagate NAS, and Seagate SkyHawk Surveillance HDDs, and this new line is certainly specced to follow in those footsteps. Featuring multi-tier caching technology, this uniquely named drive is built to handle the constant vibration inherent in 24/7 NAS spindle drives and to thrive under heavy user workloads in a high data-traffic network. Seagate IronWolf HDD 10TB Review
  15. The figures shown are for a group of 8 SkyHawk HDDs in a RAID10 config inside a Synology DS1815+. We don't test the drive alone, since no one is going to use just 1 NAS drive.
  16. On the iSCSI question, we did have it configured for access over two LAN ports across two IP subnets. RAID6 is an option, as well as RAID5. We just tested RAID10.
  17. With all systems equipped with 10G, we only measure performance over the 10G interface. It is safe to assume that 1G support in this case would cap single-stream performance at about 100MB/s, and multi-stream performance at the number of 1G interfaces x 100MB/s, up to the speed of the 10G performance numbers. The rough math is sketched below.
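     A quick Python sketch of that math (assumed practical line rates, ignoring protocol overhead):

        # Back-of-envelope link math from the post above; these are assumptions,
        # not measurements.
        GBE_MBPS = 100        # ~100 MB/s practical ceiling per 1GbE stream
        TEN_GBE_MBPS = 1000   # ~1000 MB/s ceiling for the 10GbE interface

        def multi_stream_estimate(num_1g_ports: int) -> int:
            """Aggregate 1GbE estimate, capped by what the 10GbE numbers showed."""
            return min(num_1g_ports * GBE_MBPS, TEN_GBE_MBPS)

        print(multi_stream_estimate(1))   # 100  -> single stream
        print(multi_stream_estimate(4))   # 400  -> four 1G links
        print(multi_stream_estimate(12))  # 1000 -> capped at the 10G ceiling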
  18. Serving as an upgrade to the Z620, the HP Z640 is designed to give professionals who work with resource-heavy applications an extra boost in their productivity. The Z640 is also targeted at businesses that simply want to equip their employees with a high-performance workstation at a mid-range price. As you can see from the comprehensive set of options listed in the specifications below, the HP Z640 is a highly customizable workstation suited to a wide range of use cases, allowing businesses to create a very specific build that accurately suits their needs. The HP workstation also offers a completely tool-less design that gives administrators easy access for quick component upgrades and maintenance. HP Z640 Workstation Review
  19. Hard to say without giving it a shot first. Firmware may also be a problem if the software expects to see it installed on Dell hardware.
  20. I think that would be the best overall direction, since it would open up the doors to a much broader, more agnostic set of VM storage possibilities. Right now I think it's still a bit away.
  21. QNAP's new QTS-Linux combo NAS provides a unique feature set geared towards forward-thinking SOHO users, storage enthusiasts, and IoT (Internet of Things) developers. By supporting the open-source platform that Linux embodies, QNAP has taken steps towards integrating private servers with IoT solutions and smart devices. Besides its IoT features, the TS-453A is a fully functioning 4-bay private server that utilizes all of QNAP's QTS applications, empowering users to easily virtualize their storage operations. QNAP TS-453A 4-Bay NAS Review
  22. The Samsung 850 EVO offers lots of capacity options, has great performance, and is pretty cheap.
  23. Most solutions that work with NVMe use mirroring or other data protection schemes at the application level instead of RAID. Also, I'm nearly certain you don't have NVMe slots. It's a custom server option that you have to go out of your way to select. We just went through this with a custom-built R630 for NVMe testing.
  24. The shape of the connector is the same, yes. Electrically it is completely different. The Dell server must be equipped to support NVMe drives from the factory. For models that support NVMe packages, Dell will sell one with X number of SATA/SAS bays, or fewer SAS/SATA bays with a mix of 4 NVMe slots. They look the same, but for all intents and purposes, it's all different on the inside: different drive backplane, different cabling, different motherboard connection/adapter. There is no way to convert a server to support 2.5" NVMe. If you want NVMe in a server not designed for it, the PCIe AIC option is your only route. With VMware, there is no option to RAID. There are no hardware RAID NVMe options on the market that VMware supports, and VMware also doesn't do software RAID. A quick way to check which bus the OS actually sees is sketched below.
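     Since the bays look identical, one quick check of how the OS actually enumerated a drive (a Linux-side sketch; the device names are examples):

        # NVMe and SATA/SAS drives show up under different Linux device names,
        # which confirms which bus the backplane actually wired the bay to.
        import glob

        nvme = glob.glob("/dev/nvme*n1")   # NVMe namespaces, e.g. /dev/nvme0n1
        sata = glob.glob("/dev/sd?")       # SATA/SAS disks, e.g. /dev/sda

        print("NVMe devices:", nvme or "none found")
        print("SATA/SAS devices:", sata or "none found")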
  25. NVMe drives can only be installed in a slot that supports NVMe. It sounds like you installed the 2.5" NVMe Samsung 1725 in a SAS/SATA bay. Further, PERC and its pass-through support are for switching between RAID and HBA mode for a SATA/SAS SSD, not NVMe. No traditional RAID card supports managing NVMe SSDs (nor would you really want one to). Most NVMe SSDs are capable of saturating one full PCIe slot already, so trying to route multiples through one card would bottleneck them terribly; the rough math is sketched below.
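     Rough numbers behind that bottleneck claim, using PCIe 3.0 figures (~985 MB/s usable per lane; a sketch, not a measurement):

        # Why funneling several NVMe SSDs through one add-in card starves them.
        MBPS_PER_LANE = 985            # ~usable bandwidth per PCIe 3.0 lane

        drive_bw = 4 * MBPS_PER_LANE   # each NVMe SSD rides an x4 link: ~3940 MB/s
        slot_bw = 8 * MBPS_PER_LANE    # hypothetical x8 card slot: ~7880 MB/s

        drives = 4
        per_drive = slot_bw / drives   # ~1970 MB/s each, about half of drive_bw
        print(f"each drive limited to ~{per_drive:.0f} of ~{drive_bw} MB/s")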