iXsystems Titan 316J JBOD Review Discussion


10 replies to this topic

#1 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,449 posts

Posted 17 December 2012 - 01:42 PM

The iXsystems Titan iX-316J is a 16-bay, 3.5" JBOD storage expansion shelf. The JBOD has become a permanent fixture of the StorageReview lab, enabling us to directly connect SATA or SAS drives to a host compute system via an LSI 9207-8e SAS HBA. The iX-316J suits a variety of use cases, from housing up to 64TB of SATA drives all the way up to the speedier 2.5" 10K and 15K drives, should the user choose to go that route. In this review we look at three different sets of hard drives, clearly illustrating the performance vs. capacity trade-offs that come with modern enterprise hard drives.


iXsystems Titan 316J JBOD Review
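
To put the 64TB figure in context, here is a minimal Python sketch of the capacity math; it assumes all 16 bays are filled with 4TB nearline SATA drives (an assumption, since the post only quotes the total) and shows how much of that survives a mirrored layout such as RAID 10.

    # Quick capacity math for a 16-bay JBOD shelf.
    # Assumption: 4TB drives in every bay; the review only quotes the 64TB total.
    BAYS = 16
    DRIVE_TB = 4  # hypothetical per-drive capacity

    raw_tb = BAYS * DRIVE_TB        # 64 TB raw
    raid10_usable_tb = raw_tb // 2  # mirrored pairs cut usable space in half

    print(f"Raw capacity:   {raw_tb} TB")
    print(f"RAID 10 usable: {raid10_usable_tb} TB")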

#2 chris.wilkes

chris.wilkes

    Member

  • Member
  • 3 posts

Posted 17 December 2012 - 10:04 PM

Seriously, RAID 10 how? Software RAID 10? Which software RAID? Can you call it "enterprise" while using software RAID? SATA and enterprise in the same sentence is a serious mistake: a single SATA drive can hang the entire SAS bus for 90 seconds, causing issues worse than a drive failure. Spending a few extra bucks on fat SAS drives is well worth it. Don't bother with SATA; they are a joke.

#3 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,449 posts

Posted 18 December 2012 - 08:39 AM

chris.wilkes, on 17 December 2012 - 10:04 PM, said:

    Seriously, RAID 10 how? Software RAID 10? Which software RAID? Can you call it "enterprise" while using software RAID? SATA and enterprise in the same sentence is a serious mistake: a single SATA drive can hang the entire SAS bus for 90 seconds, causing issues worse than a drive failure. Spending a few extra bucks on fat SAS drives is well worth it. Don't bother with SATA; they are a joke.


Software RAID is used throughout many enterprise appliances (open up many NAS and SAN devices and quite a few turn out to run software RAID), so it can't be discounted as easily as you might think. As for nearline SATA, the SAS version of the Ultrastar wasn't available to us in a large enough sample size when we started building this review, and the chassis specifically supports both SAS and SATA, so showing both in use was the primary goal.
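
For readers wondering what a software RAID 10 build actually looks like on a bare JBOD, below is a minimal sketch driving Linux mdadm from Python. The review never states which software stack was used, so mdadm, the device names, and the drive count here are purely illustrative assumptions.

    # Minimal sketch: building a software RAID 10 array across four JBOD drives
    # with Linux mdadm (run as root). Device names and drive count are
    # hypothetical; this is not necessarily the stack used in the review.
    import subprocess

    DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed example drives

    def create_raid10(md_device="/dev/md0", devices=DEVICES):
        """Create a RAID 10 md array from the given block devices."""
        subprocess.run(
            ["mdadm", "--create", md_device,
             "--level=10", f"--raid-devices={len(devices)}", *devices],
            check=True,
        )
        # Sync/rebuild progress shows up in /proc/mdstat, just as it would from a shell.
        with open("/proc/mdstat") as f:
            print(f.read())

    if __name__ == "__main__":
        create_raid10()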

#4 chris.wilkes

chris.wilkes

    Member

  • Member
  • 3 posts

Posted 18 December 2012 - 05:15 PM

Thanks for the response. What software was utilized for the RAID 10? I will respectfully disagree that "enterprise" NAS or SAN solutions use software RAID. Can any examples be named? Some do, but they are highly proprietary and/or hardware-exclusive.

#5 mbreitba

mbreitba

    Member

  • Member
  • 5 posts

Posted 19 December 2012 - 11:29 AM

Software RAID - let's see: Sun/Oracle/Nexenta ZFS-based storage systems? Those are all software based, and most decidedly "enterprise"-ready storage solutions.

Next question: any reason to lean towards the iXsystems? Isn't this just a rebranded SuperMicro SBB?

#6 Brian

Brian

    SR Admin

  • Admin
  • 5,307 posts

Posted 19 December 2012 - 12:49 PM

mbreitba, on 19 December 2012 - 11:29 AM, said:

    Software RAID - let's see: Sun/Oracle/Nexenta ZFS-based storage systems? Those are all software based, and most decidedly "enterprise"-ready storage solutions.

    Next question: any reason to lean towards the iXsystems? Isn't this just a rebranded SuperMicro SBB?


It is an SMCI box. We're working with iXsystems because they were more interested in engaging on this particular review than SMCI was directly.

Brian

Publisher- StorageReview.com
Twitter - @StorageReview

 

#7 dilidolo

dilidolo

    Member

  • Member
  • 51 posts

Posted 19 December 2012 - 03:14 PM

chris.wilkes, on 18 December 2012 - 05:15 PM, said:

    Thanks for the response. What software was utilized for the RAID 10? I will respectfully disagree that "enterprise" NAS or SAN solutions use software RAID. Can any examples be named? Some do, but they are highly proprietary and/or hardware-exclusive.


NetApp uses RAID-DP, which is software based, unless you think NetApp isn't enterprise.
ZFS uses software-based RAID as well.
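
To make the ZFS point concrete, here is a minimal sketch of building a pool of striped mirrors, ZFS's rough equivalent of RAID 10, on plain JBOD drives; the pool name and device paths are assumed for illustration, and all redundancy is handled in software.

    # Minimal sketch: a ZFS pool of striped mirrors (roughly RAID 10) built on
    # raw JBOD drives. Pool name and device paths are hypothetical examples.
    import subprocess

    POOL = "tank"
    MIRROR_PAIRS = [("/dev/sdb", "/dev/sdc"), ("/dev/sdd", "/dev/sde")]

    def create_pool():
        cmd = ["zpool", "create", POOL]
        for a, b in MIRROR_PAIRS:
            cmd += ["mirror", a, b]      # each "mirror a b" vdev is a mirrored pair
        subprocess.run(cmd, check=True)  # mirroring/striping is done entirely in software
        subprocess.run(["zpool", "status", POOL], check=True)

    if __name__ == "__main__":
        create_pool()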

#8 dilidolo

dilidolo

    Member

  • Member
  • 51 posts

Posted 19 December 2012 - 03:17 PM

I thought you were going to test ZFS, but it turned out to be just the shelf, which is a rebranded Supermicro. I would really love to see you test TrueNAS.

#9 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,449 posts

Posted 19 December 2012 - 03:46 PM

dilidolo, on 19 December 2012 - 03:17 PM, said:

    I thought you were going to test ZFS, but it turned out to be just the shelf, which is a rebranded Supermicro. I would really love to see you test TrueNAS.


The stepping stone was working with the shelf first, leading into upcoming caching solution reviews; talks around TrueNAS are already ongoing. We are hoping to have a system in soon to start testing in our lab.

#10 chris.wilkes

chris.wilkes

    Member

  • Member
  • 3 posts

Posted 20 December 2012 - 12:17 AM

ZFS performance does not scale well with 16 spindles of fat SAS disks scrubbing and resilvering. NetApp on iXsystems? I don't think so. Good luck keeping your uptime at five nines on any software-based RAID on commodity hardware. I've seen too many people go down in flames trying.

#11 Walt Roshon

Walt Roshon

    Member

  • Member
  • 1 posts

Posted 22 September 2013 - 06:18 PM

In my 25 years in the industry, I've seen hardware RAID cause the most grief. I can recall numerous times when a single drive failure took down an entire array on a hardware RAID controller, at least temporarily. I even saw it once in an AS/400.

Software RAID rules big iron in the enterprise; it's just hidden in proprietary hardware packages behind proprietary interfaces. EMC, Oracle, IBM, HP, Hitachi, etc. use JBODs (with varying levels of redundancy) with software RAID. Enterprise storage controllers today are mostly high-end "commodity" hardware with varying levels of redundancy in custom enclosures, though vendors would like you to believe otherwise. The more features and flexibility, the greater the likelihood that a storage OS is running on an x86-compatible platform. Some of them even eat some of your SAN storage for their operating systems.

The worst storage disaster I ever saw was when one of the redundant power supplies in a nearly new P6000 EqualLogic SAN burped, glitched both storage controllers in the chassis, and corrupted an entire 40 TB array. That left a multinational corporation dead in the water for three days while Dell/EqualLogic engineers rebuilt it.

ZFS has run multi-PB stores for mission-critical apps for years, with racks full of JBOD shelves controlled by Sun and Oracle servers with gobs of RAM. I have yet to see a packaged ZFS appliance from any company but Sun, and now Oracle, that isn't woefully under-provisioned for RAM to scale out beyond 16 TB.
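
That RAM point can be made concrete with the commonly cited community rule of thumb of roughly 1 GB of RAM per TB of pool capacity; the figure below is that ballpark guideline, not something stated in this thread or by any vendor here.

    # Rough ZFS RAM sizing using the commonly cited rule of thumb of about
    # 1 GB of RAM per TB of pool capacity. Ballpark planning numbers only;
    # not taken from this thread or any vendor spec.
    GB_RAM_PER_TB = 1  # assumed rule of thumb

    for pool_tb in (16, 64, 256):
        print(f"{pool_tb:>4} TB pool -> ~{pool_tb * GB_RAM_PER_TB} GB RAM suggested")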

Google, Facebook, Rackspace, and Amazon all run on commodity hardware. I've handled decommissioned Rackspace servers, and "white box" came immediately to mind.

I will agree that you can take a single commodity server with a single SAS controller hooked up to a cheap JBOD full of consumer SATA drives, install FreeNAS, and think you're in business; don't cry when it flames out. But there's no reason you can't engineer large, robust, reliable, high-performance storage systems from properly selected commodity hardware with appropriate levels of redundancy and any of a variety of software from various vendors. In fact, it's possible to engineer a storage system using commodity hardware that is more resilient than most dedicated enterprise SAN/NAS storage. What you can't get is the support and fancy management software that comes with them.


