
JetStor SAS 616iSD 10G iSCSI SAN Review Discussion


7 replies to this topic

#1 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,428 posts

Posted 17 September 2012 - 02:39 PM

The SAS 616iSD 10G is part of an extensive line of block-level storage arrays offered by JetStor. The iSCSI 616iSD offers up to 48TB of storage via 16 3.5" bays in a 3U enclosure. The array features dual redundant active/active RAID controllers and four 10GbE ports, two per controller. Each controller is powered by an Intel IOP342 64-bit Chevelon dual-core storage processor and includes a parity-assist ASIC, an iSCSI-assist engine, and a TCP Offload Engine (TOE). Should 48TB prove insufficient, the 616iSD can be expanded with four JBOD shelves (SAS716J) for a maximum capacity of 240TB per array.

JetStor SAS 616iSD 10G iSCSI SAN Review
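
For anyone curious how a host actually attaches to a dual-controller iSCSI array like this, here is a rough sketch using the standard Linux open-iscsi tools. The portal IPs and target IQN below are placeholders, not values from the review setup, and your array's documentation will have the real ones.

    # Discover targets on each controller's 10GbE portal (placeholder IPs)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.20.10:3260

    # Log in to the discovered target (placeholder IQN)
    iscsiadm -m node -T iqn.2012-09.example:jetstor.target0 --login

    # With paths to both controllers logged in, check multipathing
    multipath -ll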

#2 dilidolo

dilidolo

    Member

  • Member
  • 51 posts

Posted 17 September 2012 - 04:36 PM



Latency is too high.

IOPS is important, but latency is even more important, especially for certain workloads.

#3 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,428 posts

Posted 17 September 2012 - 04:54 PM

Latency is too high.

IOPS is important, but latency is even more important, especially for certain workloads.


In which area is it too high? We showed both "highest IOPS, but highest latency" performance as well as scaled performance to home in on the optimal load.

#4 dilidolo

dilidolo

    Member

  • Member
  • 51 posts

Posted 17 September 2012 - 06:41 PM

In which area is it too high? We showed both "highest IOPS, but highest latency" performance as well as scaled performance to home in on the optimal load.


Avg 4K/8K. I expect to see sub-10 ms at all times.

#5 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,428 posts

Posted 17 September 2012 - 08:37 PM

Avg 4K/8K. I expect to see sub-10 ms at all times.


In our 100% read and 100% write workloads we went for peak throughput under a 16-thread/16-queue load (an effective queue depth of 256), which is why you are seeing the higher latencies. In our mixed workloads we scaled the load to show where the optimal range is, as well as where latency increased dramatically.
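
For context, a roughly equivalent load expressed as an FIO command line would look something like the lines below. This is only a sketch of the 16-thread/16-queue pattern, not the exact profile used in the review; the device path and runtime are placeholders.

    # 4K random read, 16 jobs x queue depth 16 = 256 outstanding I/Os
    fio --name=4k-rand-read --filename=/dev/sdX --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --numjobs=16 --iodepth=16 \
        --runtime=300 --time_based --group_reporting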

#6 vaidab

vaidab

    Member

  • Member
  • 2 posts

Posted 12 February 2013 - 09:06 AM

Kevin, I'd like to do similar tests on some other SSDs. Could I get your FileBench or IOzone scripts?

#7 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,428 posts

Posted 12 February 2013 - 10:07 AM

Currently we use FIO for our benchmarks, although depending on your setup you might have better luck with IOMeter. Grab the ICF file listed on this site:

http://technodrone.b...ur-disk-io.html

It includes a quick 8K 70/30 profile already set up, which you can apply to your device.
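
If you go the FIO route instead, a rough equivalent of that 8K 70/30 profile would be something like the following. The job name, device path, thread count, and runtime are placeholders to adjust for your own setup.

    # 8K random workload, 70% read / 30% write (placeholder target device)
    fio --name=8k-70-30 --filename=/dev/sdX --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=70 --bs=8k --numjobs=4 --iodepth=16 \
        --runtime=300 --time_based --group_reporting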

#8 vaidab

vaidab

    Member

  • Member
  • 2 posts

Posted 13 February 2013 - 09:27 AM

Thank you very much.


