Maxtor storage

Community Reputation

0 Neutral

About Maxtor storage

Profile Information

  • Interests
    Electronics, computers (personal or enterprise), health and fitness, arts and drawing, conversing.
  1. I can tell you that ZFS on Linux is pretty stable and just as fast, because I tried it recently. I still prefer ZFS on illumos (the OpenSolaris fork) or Oracle Solaris: it is proven and mature there, Solaris isn't messy like Linux for storage/network operations, and Solaris has an incredible debugging tool called DTrace that can analyze anything on the server.

    I checked those comparisons, but I wouldn't take that Postgres test to heart, because the two machines are not comparable:
    2x Xeon 5450 (code name Harpertown, obsolete 1,333MT/s FSB, dual-channel DDR2 most likely as ECC/Registered, which adds some latency, 3GHz 4-core with no HT, 12MB L2 cache)
    vs. 1x i5-2500K (code name Sandy Bridge, DMI 2.0 at 5GT/s, dual-channel DDR3, 3.3GHz 4-core with no HT, 6MB L3 cache)

    I am actually thinking of playing around with Postgres and its settings on Solaris. To be honest I have never used it, but I am intrigued to test my machines, and also to test it under Linux to see if I hit the same problem. You can always install Solaris on the Dell server and see if you also have performance issues there with your settings. The Solaris OS is very easy to manage and I'd be glad to set it up quickly with you.

    As for the Dell PERC controllers, I doubt there is an actual performance difference beyond the RAM available on the card for more write-back buffer capacity. They use the exact same CPU and such. Either card will be just as incredibly fast.
  2. Well, at least according to that quick PostgreSQL benchmark, the Dell is much faster than the Mac Mini, as expected. For reference, I ran the same tool (PostgreSQL 9.4 for Solaris) on my ZFS storage server; the attachments show the results with Hyper-Threading enabled and with it disabled. Higher ops/sec = better (operations per second); lower usecs/op = better (latency per operation).

    Obviously my machine is insanely fast because it is using ZFS with system RAM for write-back and a regular LSI SAS card for disk drive control. I can re-test it with an LSI RAID card and see what the figures are with the card's own memory used for write-back instead.

    As for HT and its impact on pure storage operations, I have always found it to have abysmal to zero impact. This holds true even for ZFS in my tests (ZFS uses the main system CPU). The attachments show that performance actually degrades by a tiny bit with HT disabled for this tool. Though I am not an expert on HT, nor do I write threaded applications, I have seen it degrade application performance by a small but noticeable amount in the past, though that was years ago. Currently, with HT enabled, the applications I use either perform the same, hardly slow down, get a boost, or sustain more CPU load than with HT disabled. In my case, leaving HT enabled is beneficial.

    My storage server specifications for these tests are:
    - Oracle SunOS 5.11 (Solaris 11.1)
    - Single Intel Xeon E5-2620 v2 (2.1GHz 6-core, 12 virtual with HT)
    - 64GB of RAM (4x 16GB DDR3-1333 low-voltage DIMMs in quad channel)
    - LSI HBA 9207 card (LSI SAS-2308 chipset)
    - LSI RAID 9260 card (LSI SAS-2108 chipset)
    - 6x WD Raptor (SATA, 1TB, 10K RPM) in ZFS RAIDz-2 (ZFS's RAID-6 equivalent)

    These Raptor drives are the current latest for desktops/power users/workstations. There are newer SAS versions for servers, but I am very satisfied with these!
    NEW: ZFS with no write-back, and the LSI 9260 with 7x WD Raptor in RAID-5 with write-back. Your Dell PERC H710P RAID card is pretty high end, as we can see from the numbers you posted. My card has an LSI SAS-2108 chipset with 512MB of DDR2-800; the PERC H710P has an LSI SAS-2208 with 1GB of DDR3 (according to Dell).
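Since this thread leans on those two metrics, here is a minimal sketch of how they relate: at a fixed level of concurrency, usecs/op and ops/sec are just inverse views of each other. The worker counts and throughput figures below are hypothetical illustrations, not numbers from my runs.

```python
# With a fixed number of concurrent workers, average latency per
# operation (usecs/op) and throughput (ops/sec) are inversely related.
# Hypothetical figures only -- not taken from the attached benchmarks.

def usecs_per_op(ops_per_sec: float, workers: int = 1) -> float:
    """Average latency per operation, in microseconds."""
    return workers * 1_000_000 / ops_per_sec

print(usecs_per_op(50_000))      # one worker at 50k ops/sec -> 20.0 usecs/op
print(usecs_per_op(50_000, 8))   # eight workers at the same total -> 160.0 usecs/op
```

This is why comparing usecs/op across runs only makes sense when the client/worker count is held constant.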
  3. This is seriously strange. That Dell has excellent specifications. Something is definitely not correct somewhere, and we're not able to spot it by communicating through a forum. I am willing to dedicate time and explore this with you if we can both set time aside for it. I suggest Skype with its Screen Sharing so I can watch while we revise settings and dig much deeper. It'll be a lot faster than trying to get help through forums at this point. So, let's get to the bottom of this "dilemma". Send me a private message through here, if interested, to get started.
  4. SFF-8087 Raid Ports

    The expanders are cards. The stand-alone ones normally go into a PCI slot just for mounting, while the backplane versions are smaller and screw onto the back of a backplane. Be careful: some of these do not include an SFF-8087 to SFF-8087 cable for connecting the main card and the expander together, so check unless you already have one.
  5. SFF-8087 Raid Ports

    This is due to the SAS protocol and how it was designed vs. SATA. In SAS, addressing is not bound to one link per device the way it is in SATA. In your case, those 8 ports can address up to 240 devices when connected to expanders (typically found in storage backplanes) or when daisy-chaining storage arrays. Keep in mind that your latency and bandwidth are still capped by the physical port limits of the card; for you, a maximum of 48Gbps.
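For anyone wanting to re-derive that 48Gbps cap, a quick sketch, assuming a SAS-2 card (6Gbps per lane) with two SFF-8087 connectors at 4 lanes each:

```python
# Back-of-the-envelope check of the bandwidth cap mentioned above,
# assuming a SAS-2 card with 8 lanes (two SFF-8087 connectors, 4 each).
LANES = 8            # physical PHYs on the card
GBPS_PER_LANE = 6    # SAS-2 line rate per lane

# Expanders raise the number of devices you can address (up to 240
# here), but not the card's aggregate bandwidth:
aggregate_gbps = LANES * GBPS_PER_LANE
print(aggregate_gbps)  # 48
```

In other words, the 240-device figure is an addressing limit, while the 48Gbps figure is the physical ceiling shared by everything behind the expanders.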
  6. Samsung SSD 845DC PRO Review Discussion

    I am greatly impressed.
  7. Pricing is still an issue with flash, especially when it comes to backup storage or storage servers that read/write blocks of 64K and larger, at which HDDs do well enough. I understand that flash is not the same thing, but at least we'd like more reasonable prices at high capacities. For most of us, a combination of SSD/HDD/RAM is the way to go at the moment.
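To put rough numbers on the capacity argument, here is a sketch with made-up per-GB prices (illustrations only, not quotes):

```python
# Hypothetical per-GB street prices to illustrate why flash still
# loses at backup/bulk capacities; adjust to current prices.
HDD_USD_PER_GB = 0.04
SSD_USD_PER_GB = 0.50

pool_gb = 20_000  # e.g. a 20TB backup pool
hdd_cost = pool_gb * HDD_USD_PER_GB
ssd_cost = pool_gb * SSD_USD_PER_GB
print(hdd_cost)             # 800.0
print(ssd_cost)             # 10000.0
print(ssd_cost / hdd_cost)  # 12.5 times the spend for the same capacity
```

The multiplier shifts with market prices, but at bulk-capacity, large-block workloads the gap stays wide enough that the SSD/HDD/RAM mix keeps winning.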
  8. RAID 5: Writes faster than Reads ?

    UPDATE: The high write numbers you get come from having Write-Back enabled on the array, while you expected much lower figures. Caching not only accelerates writes but also masks additional write delays (such as parity) from applications: I/O stays in system RAM/controller RAM/drive RAM and is written out asynchronously, instead of each I/O being truly written to the drive(s) immediately (Write-Through). The cache is still used for queuing with Write-Through, but the controller will not report an I/O complete until it is truly written to disk. If a power loss happens with Write-Through, data stays consistent from the applications' point of view, because whatever was buffered was known not to be truly written.

    It is okay to use write caching as long as a battery is present to keep un-flushed data alive: a BBU in the case of a RAID controller, a UPS in the case of a system with a standard controller.

    I too have seen writes being faster than reads at some block sizes. It could be the controller being optimized in such a way, but I am not completely sure.

    Just a minor note on your single-drive testing: you most likely had write cache enabled (same as Write-Back) on the single drive while you had Write-Through (same as write cache disabled) on the RAID-5 array. The comparison is not appropriate in this case. You can check whether you have Write-Back or Write-Through on the array, and check whether the controller changed the Windows setting (Device Manager > Disk Drives > Properties > Policies). If you have Write-Through on the array, you should disable write cache on the single drive you test against.

    NOTES: I only have two WD Red 3TB drives and can't test RAID-5 on them. For my main tests, I used latest-generation 1TB Raptors.
    System configuration:
    - 1x Intel Xeon E5-2620 v2 (Ivy Bridge)
    - 4x 16GB ECC Registered LP DDR3-1333 @ 9-9-9-24 (quad channel)
    - LSI MegaRAID 9260-8i (4 of 8 ports connected to the SAS expander)
    - 1x WD Red 3TB
    - 1x WD Raptor 1TB
    - 3x WD Raptor 1TB (RAID-5)
    - 4x WD Raptor 1TB (RAID-5)
    - 5x WD Raptor 1TB (RAID-5)
    - 6x WD Raptor 1TB (RAID-5)

    P.S.: I might also post benchmarks from Solaris ZFS as RAIDz (RAID-5 equivalent) to compare against traditional parity.
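The Write-Back vs. Write-Through difference above boils down to when the controller acknowledges the I/O. A toy model, with made-up latencies rather than measurements:

```python
# Toy model of the behavior described above: Write-Back acknowledges
# an I/O once it lands in cache, Write-Through only after it reaches
# the media. Latencies are made-up illustrative numbers.
CACHE_WRITE_US = 5      # hypothetical: landing an I/O in controller RAM
MEDIA_WRITE_US = 4000   # hypothetical: committing an I/O to the platters

def ack_latency_us(write_back: bool) -> int:
    """Microseconds until the controller reports the write complete."""
    if write_back:
        return CACHE_WRITE_US               # ack'd from cache, flushed later
    return CACHE_WRITE_US + MEDIA_WRITE_US  # ack'd only after the media write

print(ack_latency_us(write_back=True))   # 5
print(ack_latency_us(write_back=False))  # 4005
```

The gap between those two acknowledgment times is exactly what makes cached writes look faster than reads in a benchmark, and it is also the window a BBU or UPS has to protect.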
  9. Dell EqualLogic or SUN ZFS Storage Appliance?

    I don't doubt your experiences with ZFS in the past. I have read about its past updates, and some of it was a horror to think about if something failed. I started using it from v28 onward. But as I have said, Solaris is such a mature operating system and I wouldn't settle for less right now. It is a pity what happened to Sun Microsystems. At least most of the original developers are at Nexenta and Joyent (SmartOS), with the budget to continue.
  10. Sadly, the issue might be that companies don't want to invest in this due to patent royalties or just low demand for the investment return. The one other device/drive that I know of that is current generation is the STEC ZeusRAM (3.5", 8GB, SAS). The other newer one from the same company is the STEC s840Z (2.5", 16GB, SAS).
  11. RAID 5: Writes faster than Reads ?

    IO Policy on the controller needs to be set to "Direct". This stops the controller RAM from being used to cache/buffer I/O. You also have Write-Back enabled, which also enhances performance and is the more common setting to leave on. If you change that setting to Write-Through, then you get 100% bare drive performance on all writes (unless you also disable the small onboard cache on the drives).
  12. Dell EqualLogic or SUN ZFS Storage Appliance?

    Though this is an old thread, I thought I'd post some of my experiences. As for ZFS, I wouldn't settle for anything outside of Solaris (Oracle or illumos) just yet. Solaris is the most stable operating system for it, and ZFS is at its best there. There will be a day when Linux ZFS finally matches it, but that is not yet the case. The caching system in ZFS is incredible and gets faster with better hardware. ZFS itself doesn't care what hardware you change, as long as the drives (individual/mirror/RAIDz or SLOG drives) do not fail or get swapped between an export and the following import. If you want to manage and script everything yourself, go for a clean illumos fork. Otherwise, go with a paid solution like Nexenta that does everything for you, or at least has almost everything already implemented, with support.
  13. Obviously RDMA protocols will beat TCP-based protocols on everything you throw at them (even in saving CPU cycles). I have quite a bit of experience with InfiniBand networks used for storage data delivery, though most of it is with Solaris ZFS (Oracle or illumos) storage hosts, VMware's vSphere, and Linux using InfiniBand's SCSI RDMA Protocol (SRP). If you're going with Windows Server 2012 or later, you want to use at least SMB 3.0 Direct (the RDMA version of SMB), which is 100% supported and encouraged by both Mellanox and Microsoft for InfiniBand over the other RDMA protocols in Windows. The RDMA setup will shine over your traditional TCP-based 10GbE networks, especially in latency. This will always remain true unless you use RDMA over Ethernet, which "can" match IB's RDMA (but pricing remains higher). QDR InfiniBand is also priced a lot cheaper than 10GbE, if you don't mind a bit more setup and management with InfiniBand. I don't have a final opinion on your hardware setups, but I will gladly post my system specs and performance figures.
  14. Of course. Their laptops are the only thing I'll ever consider buying from them, and now I don't have to buy two laptops.
  15. I don't think that is the case. I would consider buying one of their laptop computers, and being able to run both operating systems is very nice; I don't have to buy two completely different laptops just because I want each platform available when I need it. Besides, Apple's laptop line is very well designed and thin. I don't care much about the weight, but they are just very well-built laptops.

    My point was that Apple wants sales, and if their computers can get people to switch over while still being able to run some of their Windows applications, then that is a benefit to them and to the customers. They can also win customers who like to have all sorts of operating systems fully supported on the hardware. I may add that I'd love to have Windows and Mac OS X in one machine, running natively, and switch between the operating systems when I need to do a specific task.