About thesix

  1. thesix

    SiI 3114

    If your purpose is testing RAID-Z, then you don't need HW RAID, and of course you cannot use RAID-Z on top of a _single_ HW RAID disk; RAID-Z needs multiple devices to stripe its data and parity across.
  2. thesix

    SiI 3114

    Not on Solaris (or AIX, etc.). The syntax of your 'dd' commands also won't work on Solaris. Change them to:

    # time dd if=/dev/zero of=/path-to-raid-array/10GBfile bs=256k count=40960
    # time dd if=/path-to-raid-array/10GBfile of=/dev/null bs=256k
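A scaled-down version of the same test can be used to sanity-check the command syntax before committing to a full 10 GB timing run; the /tmp path and the 10 MB size below are arbitrary stand-ins, not from the original post.

```shell
# Scaled-down sanity check of the sequential write/read timing test above.
# /tmp/ddtest and the 10 MB size are placeholders; for the real test,
# point of= at the RAID array and raise count (40960 x 256k = 10 GiB).
TESTFILE=/tmp/ddtest

# Sequential write: 40 blocks x 256 KB = 10 MiB
time dd if=/dev/zero of="$TESTFILE" bs=256k count=40

# Sequential read back
time dd if="$TESTFILE" of=/dev/null bs=256k

wc -c < "$TESTFILE"   # 10485760 bytes
rm -f "$TESTFILE"
```

Dividing the byte count by the reported elapsed time gives the approximate sequential throughput.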
  3. thesix

    RAID-Z testing

    ZFS' data buffer lives in kernel memory, which has a very limited address space in 32-bit mode regardless of how much physical memory is on the box. Therefore, doing large buffered writes can easily make your system run short of kernel memory, which leads to severe problems (slowness, out-of-memory errors, system hangs, etc.). To run ZFS as a file server, you should use a 64-bit CPU running a 64-bit kernel. It doesn't have to be a super fast one; pretty much any x64 CPU on the market will do.

    As for unplugging the power: no problem, ZFS is designed for that. Reinstall the OS and re-import the zpool: no problem. Move the disks to another Solaris host (equal or newer release) and re-import the zpool: no problem, even between x86/x64 and SPARC.

    A lot of work is being done on the performance side of ZFS. I would always use the latest build of Solaris 11 (Nevada), unless I am running a production server with paid support. Lastly, this is the authentic place to ask ZFS questions: Keep your mind open and good luck.
  4. thesix

    Need help for RAID 10

    I've never seen anyone recommend RAID 01 over RAID 10. Performance-wise, I believe RAID 01 can outperform RAID 10 in some cases. However, just do what (almost?) everyone else does: use RAID 10. I suppose you have read this.
  5. thesix

    What's the fastest processor available?

    I know. Because "It cannot be any type multi-core solution" is a strange "requirement", and you started with "single fastest processor available", so which "requirement" is more important? Did you see that the Opteron 256 is "1 core, 1 chip, 1 core/chip"? Yes. Why does it matter? Cost? It's the "single fastest processor available", isn't it? I specifically mentioned that CFP2000 measures single-thread performance, unlike CFP_rate2000. That's why I ONLY gave you links to the top two _x64_ CPUs! The top-10 list has absolutely nothing to do with your request; it's an FYI for all the readers. Stop complaining about why people don't answer your question the way you like. Try to find the useful bits and appreciate what you got, especially when it was an off-topic question to begin with and you failed in your own research.
  6. thesix

    What's the fastest processor available?

    Does MFLOPS measure floating-point performance? If so, you can check SPECfp2000 results. I picked the two highest x64-based results (base, not peak) for you:

    2775 - Dell Precision Workstation 690 (Intel® Xeon® processor 5160, 3.0 GHz)
    2260 - Sun Ultra 40 (AMD Opteron 256 / 1 core, 1 chip, 1 core/chip)

    I don't think multi-core/chip or SMT/HT helps CFP2000 (they help SPECfp_rate2000), so it's good to use. This benchmark is dominated by non-x64 chips, which is not surprising. Here are the top-10 values and companies:

    3271 IBM
    2851 HITACHI
    2850 IBM
    2839 IBM
    2830 IBM
    2815 IBM
    2801 HITACHI
    2783 Fujitsu
    2775 Dell
    2712 Hewlett-Packard
  7. thesix

    batch scripting in RHEL4 WS

    Create a one-line file "newscript":

    cfx5solve -def "cfx batch 40.def"

    Assuming cfx5solve is in your PATH, check with:

    # which cfx5solve

    Then run (assuming newscript is in the current directory):

    # sh ./newscript

    If you still get the same error, the error might come from the "cfx5solve ..." command itself, don't you think? For example: is "40.def" a file name? Should it be in the current directory? Is "cfx" another command? Should it be in the PATH? Etc.
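The wrapper-script recipe above can be tried end to end even without the solver installed; in this sketch the real cfx5solve invocation is replaced by a hypothetical echo stand-in, so only the script-creation and "sh ./newscript" mechanics are being demonstrated.

```shell
# Same wrapper-script technique as above, with the solver replaced by a
# hypothetical echo stand-in (the real line would be:
#   cfx5solve -def "cfx batch 40.def")
cat > newscript <<'EOF'
echo "solver invoked with def file: $1"
EOF

# Run it the way suggested above:
sh ./newscript "40.def"    # prints: solver invoked with def file: 40.def
rm -f newscript
```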
  8. thesix

    Solaris revisited

    This is the most relevant part for StorageReview members. What I appreciate most:

    1) Easy administration - a very well designed CLI
    2) Reliability - this is the most significant point IMHO
    3) Feature-rich - snapshots, anyone?

    There's a lot to write about ZFS; I am thinking a feature article on StorageReview would be good. ZFS will be released in Solaris 10 Update 2 this month. I have been using it on my file server through Solaris Express ever since it was released there and open-sourced.
  9. thesix

    Solaris revisited

    Search for '"Ultra 5" SCSI' on comp.unix.solaris; you should see many hits. I suppose quite a few SCSI adapters work fine. Non-Sun SCSI disks shouldn't be a problem.
  10. Sorry to disappoint you, but I don't have any experience with these cards. My experience with I/O comes from performance tuning and debugging on "high end" systems (pSeries/HBA with AIX), and also at the very "low end" (software RAID on Solaris/Linux). I stand by my "theory" on where the bottleneck would be in your situation, though. Many others in this forum have much broader experience with RAID adapters. Some of them might be willing to "put his/her neck on the line".
  11. Sorry for sounding a little dumb, but I want to get this right: "shall" isn't a typo, right? You are saying it is worth spending the money on an ARC-1120. qasdfdsaq is right, I am not a native English speaker; what I meant to say is: I will not waste my money on an expensive RAID adapter with a large amount of cache if I know the system only (or mostly) does large sequential writes (i.e., the file size is much larger than the cache size in the adapter).
  12. In general, yes, you can use a different brand.
  13. Sounds like the bottleneck will be at the GigE/tcpip and app/filesystem stack. I don't think you shall waste money on expensive RAID adapters with large NVRAM/cache, if I/Os are mostly large sequential writes.
  14. Would those be local writes or network writes? Sequential or random writes? One write at a time, or multiple writes in parallel? Are reads mixed with writes? Different HW or SW RAID setups have different strengths. Different tunings on different OS/filesystem combinations also matter a lot. It's hard to answer without knowing more details about the I/O patterns.
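The distinction between those write patterns can be illustrated with dd on a small temp file; the path, sizes, and offsets below are arbitrary examples, not from the original discussion.

```shell
# Rough illustration of two of the I/O patterns asked about above.
F=/tmp/iopattern-demo

# Sequential writes: one stream, blocks laid down back to back.
dd if=/dev/zero of="$F" bs=64k count=16 2>/dev/null

# "Random" writes: rewrite single blocks at scattered offsets
# (seek= positions each write; conv=notrunc keeps the rest of the file).
for off in 11 3 14 7; do
    dd if=/dev/zero of="$F" bs=64k count=1 seek="$off" conv=notrunc 2>/dev/null
done

wc -c < "$F"    # still 16 x 64 KB = 1048576 bytes
rm -f "$F"
```

Sequential streams let the RAID layer coalesce full stripes, while scattered single-block rewrites force read-modify-write cycles on parity RAID, which is why the answer depends so heavily on the pattern.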
  15. thesix

    3-drive Linux software RAID5 performance

    Here's what I got from my "top" system, NOT meant to be a direct comparison, since there's nothing similar in HW/SW configuration. This is just for reference, or for folks' curiosity.

    Sun W2100z, 2x Opteron 246 (2GHz), 2GB memory. Adaptec ASC-29320A U320. RAID-Z of 3 disks: FUJITSU-MAU3036NP-0104-34.25GB x 2 + SEAGATE-ST336753LW-0003-34.18GB.

    RAID-Z is introduced with ZFS in OpenSolaris. If you're not familiar with it, like how it compares to RAID-5, Jeff Bonwick's weblog is a good read.

    $ uname -a
    SunOS w2100z 5.11 snv_27 i86pc i386 i86pc

    bonnie++ Version 1.01 (Downloaded from

    $ bonnie++ -s 4096M -d /tank
    Version 1.01       ------Sequential Output------ --Sequential Input- --Random-
                       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    w2100z          4G 54268  61 50938  16 69013  21 77427  95 152428  23 502.5  1
                       ------Sequential Create------ --------Random Create--------
                       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                 files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                    16 31461  97 +++++ +++ 22759  59 30883  99 +++++ +++ 27121  99

    w2100z,4G,54268,61,50938,16,69013,21,77427,95,152428,23,502.5,1,16,31461,97,+++++,+++,22759,59,30883,99,+++++,+++,27121,99