dilidolo

Everything posted by dilidolo

  1. dilidolo

    iXsystems TrueNAS Reviews?

    I'm in Canada and we bought from Tegile directly.
  2. dilidolo

    iXsystems TrueNAS Reviews?

    Can't answer your question about TrueNAS, but Tegile has everything you list here. I think you can get a clustered config with 26TB raw space, a few SSDs for cache, and a 10Gb NIC for $20,000 list price. No, it's not Windows-based; it's ZFS with working dedup and SMB 3. I worked in a large university here and we used Tegile for our VDI environment with a thousand VMs; it performs very well.
  3. dilidolo

    NetApp FAS2240-2 Review Discussion

    There is no reason to split the disks into 2 aggregates in this setup; you are wasting disk space and limiting your performance. With the internal shelf only, you are not going to hit the controller hard, so Active/Standby is more than enough.
  4. Is this an Advanced Format disk? I'm trying to decide which 4TB SAS disk to get for my home lab. Thanks
  5. I just got one 3 days ago but haven't had time to put it into my ZFS storage server. Very nice kit.
  6. dilidolo

    iXsystems Titan 316J JBOD Review Discussion

    I thought you were going to test ZFS, but it turned out to be just the shelf, which is a re-branded Supermicro. I would really love to see you test TrueNAS.
  7. dilidolo

    iXsystems Titan 316J JBOD Review Discussion

    NetApp uses RAID-DP, which is also software-based, unless you think NetApp is not enterprise. ZFS uses software-based RAID.
  8. So the SAS version is actually slower when using a single port?
  9. Avg 4K/8K latency. I expect to see sub-10 ms at all times.
  10. Latency is too high. IOPS are important, but latency is even more important, especially for certain workloads.
  11. dilidolo

    SAS Expander or Not?

    The Chenbro uses the LSI X28 expander chip. I have a Supermicro chassis with an expander backplane using the same chip, and it works fine for SAS disks (I only put SAS behind it). I don't see any performance issue; the 4x3Gb uplink (4 lanes at 3Gb/s, roughly 1.2GB/s after encoding overhead) is more than enough bandwidth for what I need. The problem with an expander is that you add one more layer between the controller and the disks, and there can be more compatibility issues. I know many SATA disks don't work well behind an expander, especially SSDs.
  12. I have the 3TB version; it's very fast for sequential transfers. IOPS are lower than the WD Black, and access time is also higher.
  13. I don't think you need to if you use the whole disk. If you slice your disks, then you have to do the math, but ZFS loves raw disks.
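      For example, on Solaris (a minimal sketch; the pool and device names are just placeholders):

          # give ZFS the whole disk and it handles labeling and alignment itself
          zpool create tank c0t0d0
          # with a slice instead, you have to get the partition math right yourself
          zpool create tank c0t0d0s0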
  14. dilidolo

    Which drive to buy in my NAS?

    HNAS for home? LOL
  15. Hybrid doesn't have to be implemented on the disk itself. In ZFS, you can use SSDs as both write and read cache for a storage pool.
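      A minimal sketch of what that looks like on Solaris (the pool name tank and the device names are placeholders):

          # SLC SSD as the write cache (ZIL/SLOG)
          zpool add tank log c1t0d0
          # MLC SSD as the read cache (L2ARC)
          zpool add tank cache c1t1d0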
  16. dilidolo

    My 60TB Build Log

    I have 30 disks in my storage server and my requirement for it is to be easily expandable, so I have a totally different setup.

    OS: OpenSolaris b134

    Head unit: Lian Li A70B (10 internal 3.5in disk bays)
    Mobo: Supermicro X8ST3-F (6 SATA, 8 SAS ports onboard)
    CPU: W3520
    RAM: 12G Kingston ECC DDR3 1333
    2*73G 15K rpm SAS mirror for the OS
    6*2TB Hitachi SATA connected to the remaining 6 SAS ports, RAID-50
    6 SSDs connected to the onboard SATA ports (2 Intel X25-E for write cache, 4 Corsair Nova 64G for read cache)
    Additional controllers: LSI 3801E external SAS HBA, Qlogic 2462 4Gb dual-port FC HBA
    NICs: 2 onboard Intel 82574, 2 dual-port Intel 82571

    Disk shelf: Supermicro 936E1 (16 SAS/SATA bays through LSI X28 SAS expander)
    8*Seagate 15K.7 SAS 300G, RAID-10
    8*Seagate Constellation 2TB SAS, RAID-50

    Tier 1: Seagate 15K.7 in RAID-10, Intel X25-E for write cache, 2 Corsair Nova 64 for read cache; for VMware datastores and database storage through FC.
    Tier 2: Constellation 2TB in RAID-50, Intel X25-E for write cache, 2 Corsair Nova 64 for read cache; for application data.
    Tier 3: Hitachi 2TB SATA in RAID-50, no read or write cache; for user files, movies, downloads, etc.

    With this setup, I can add 2 more disk shelves and daisy chain them with the first disk shelf, or connect them to the 4 SAS ports on the LSI 3801E. I may replace the SAS HBA and get a new disk shelf with a 6Gb one, as all my SAS disks are 6Gb.
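
    For reference, the tier layouts map to ZFS roughly like this (a sketch only; pool and device names are made up):

        # Tier 1 - RAID-10: striped 2-way mirrors of the 15K.7 SAS disks
        zpool create tier1 mirror d0 d1 mirror d2 d3 mirror d4 d5 mirror d6 d7
        # Tier 3 - "RAID-50": striped RAID-Z vdevs of the Hitachi SATA disks
        zpool create tier3 raidz d8 d9 d10 raidz d11 d12 d13
        # write cache (SLOG) and read cache (L2ARC) on the cached tiers
        zpool add tier1 log ssd0
        zpool add tier1 cache ssd1 ssd2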
  17. dilidolo

    DAS Advice needed

    If you have $4000-5000, get a Supermicro chassis with a SAS2 expander. A single SAS cable to the external chassis and you are done. The chassis has 15 or 16 disk trays depending on the model. You can also daisy chain multiple chassis if you need to in the future, or if your SAS/RAID controller has 2 external SAS ports, you can connect to external chassis through different ports. If you want more resilience, you can opt for multipathing support. The chassis will run about $1000, leaving you $3000 to $4000 for disks. I'd suggest using SAS disks; Seagate Constellation ES.2 SAS disks are very good.
  18. I have a 4Gb Fibre Channel network at home, but I'm not using FC disks or enclosures; I have a SAS JBOD chassis connected to a SAS HBA in the head unit. The idea should be the same for FC. The cheapest and easiest way is to get an FC JBOD enclosure and connect it to the FC HBA directly. Now, if you want multiple clients to access the storage simultaneously, you need a cluster filesystem. Otherwise, depending on the application requirements, CIFS and NFS may work. iSCSI can export LUNs to clients, but you still need a cluster filesystem for all the clients to access one at the same time.
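      On Solaris with COMSTAR, the LUN export side looks roughly like this (a sketch; the pool, volume, and GUID are made up, and for FC you additionally need the HBA switched into target mode via the qlt driver):

          # create a 100G zvol to back the LUN
          zfs create -V 100g tank/vol1
          # register it as a COMSTAR logical unit
          sbdadm create-lu /dev/zvol/rdsk/tank/vol1
          # expose the LU to initiators (the GUID comes from the sbdadm output)
          stmfadm add-view 600144f0...
          # for iSCSI, also create a target
          itadm create-target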
  19. dilidolo

    Home server with 50 drive

    I'd pick a chassis with a 6G SAS expander, such as Supermicro. You only need one SAS port to the expander, or 2 ports if you want multipathing. If you need to add more capacity, add another chassis and daisy chain them. Also, depending on what OS you use, some can do compression and deduplication, which reduces the total storage requirement (example below).
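
    On ZFS, for instance, both are one-line settings (the pool name tank is a placeholder; note that dedup needs plenty of RAM to hold the dedup table):

        zfs set compression=on tank
        zfs set dedup=on tank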
  20. dilidolo

    VelociRaptor 600GB vs Seagate Cheetah 15k.7

    The 15K.7 will leave the Raptor in the dust. I have 8 of them, but I would still use an SSD for the OS.
  21. I have 8 of the 300GB model at home for my VMware lab. I don't use hardware RAID but software RAID-10 in Solaris ZFS, plus 1 SLC SSD as a write cache and 1 MLC SSD as a read cache. The controller is an LSI 1068e SAS HBA connecting to an external Supermicro chassis over 4x SAS lanes. I'm able to hit 1.2GB/s for sequential writes. For reads, due to the way ZFS works, plus the SSD read cache, I'm able to get over 2GB/s. The test was done writing zeros using dd, so it's not real-world. I export LUNs through 4Gb FC to VMware ESX hosts and my VMs run very fast; both sequential and random reads/writes are super fast. Again, a lot of the performance is contributed by ZFS using SSDs as read and write cache.
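      The dd test was along these lines (a sketch; the file path and sizes are just examples, and zeros are highly compressible and cache-friendly, which is why the numbers flatter the pool):

          # sequential write: 100GB of zeros in 1MB blocks
          dd if=/dev/zero of=/tank/testfile bs=1024k count=102400
          # sequential read of the same file back
          dd if=/tank/testfile of=/dev/null bs=1024k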
  22. So the idea is very similar to Solaris ZFS.
  23. Not bad. Is it the SATA or the SAS version?
  24. I've been waiting for the SAS version for a few months.