Jan Chu

Member

  • Content Count: 42
  • Community Reputation: 0 Neutral
  • Rank: Member
  1. Jan Chu

    48 tb server help

    RAID 60 is nice enough, but what do you do when all that storage is used? How do you expand? What are the plans here? With mdadm + LVM you can make a pool of RAID 6 sets and stay flexible when you want to expand. I am aware that he can replace every drive in his RAID 60 one at a time per RAID 6 (or two at a time if he has GIANT balls), and when all drives are replaced he can expand his RAID 60, but that is not very flexible. Of course he wouldn't get the same performance with a pool of RAID 6 sets in LVM because of the missing stripe. Question: what kind of performance do you need? What type of workload do you expect (small files / big files, sequential / random, read / write)? Is performance something that needs to be thought about at all? I too was a firm believer in HW RAID, but I must say I love my SW RAID, and it performs very well (granted, no BBU, so no write-back). But I'm not in it for the performance, only storage and flexibility. (A rough expansion sketch with mdadm and LVM is below.) //Jan Chu
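
    A rough sketch of what I mean by expanding, assuming an existing volume group vg_storage, a logical volume lv_data with ext4 on it, and five freshly added (already partitioned) disks sdg through sdk; all the names are just examples:

      # build a new RAID 6 set from the new disks
      mdadm --create /dev/md1 --level=6 --raid-devices=5 /dev/sd[g-k]1
      # hand the new array to LVM and grow the existing volume group
      pvcreate /dev/md1
      vgextend vg_storage /dev/md1
      # grow the logical volume and the filesystem on top of it
      lvextend -l +100%FREE /dev/vg_storage/lv_data
      resize2fs /dev/vg_storage/lv_data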
  2. Jan Chu

    48 tb server help

    The thing is, you probably want all those nice drives to be seen as one drive on your server, so you don't have to split your data. I don't know about WHS, but I use Linux with software RAID and LVM on top, and that does the trick: multiple RAID sets, one big LVM volume. You can read more about my setup in this thread (I'm about 6 replies down). But as the other guys mentioned, rebuild times on a 6-drive set of 2 TB+ disks are awful, even with HW RAID. I did lose my 6x1TB RAID 5 once (had backup, so no biggie) because of a disk failure while rebuilding. With 2 TB drives you get roughly double the rebuild time, but hey, you can tolerate one more disk failure. To sum up: fewer drives in each RAID 6, combine them logically on top, and remember your backup... what if the house caught on fire? (A sketch of the "multiple RAID sets, one big LVM" idea is below.) //Jan Chu
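
    A minimal sketch of combining two software RAID 6 sets into one volume, assuming the arrays already exist as /dev/md0 and /dev/md1; vg_big, lv_data and /storage are just example names:

      # register both arrays as LVM physical volumes
      pvcreate /dev/md0 /dev/md1
      # pool them into one volume group and carve out a single logical volume
      vgcreate vg_big /dev/md0 /dev/md1
      lvcreate -l 100%FREE -n lv_data vg_big
      # one big filesystem on top
      mkfs.ext4 /dev/vg_big/lv_data
      mkdir -p /storage
      mount /dev/vg_big/lv_data /storage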
  3. Jan Chu

    My 60TB Build Log

    I also have a small file server running Linux with software RAID and LVM2 (more on this later). I did look at OpenSolaris, but the problem I saw was that you cannot expand a RAID-Z (z=1 (RAID 5), z=2 (RAID 6), z=n (n parity drives)). The only way to expand the pool is to create a whole new RAID array and add it to the pool (and then push some other stuff out of the pool). I am in love with ZFS, but as my setup is rather small I can't use that expansion strategy (which big setups could). So I went for an easy (and cheap) setup:

    OS: Ubuntu (any Linux distro would probably do). Onboard: 7x SATA-II. RAID card: Intel RAID Controller SASUC8I (bringing 8x SATA-II without SAS expanders). Disks: 2x 1TB WD Caviar GreenPower (OS, RAID 1) and 5x 2TB WD20EADS (RAID 6). All my RAID is Linux software RAID; after a few corrupt controllers from Adaptec, 3ware and Promise, I decided I needed to be "hardware independent", so the only reason I use the RAID card is to get all those nice cheap SATA ports.

    Storage: from the ground up, I create a partition on each 2 TB drive spanning the whole disk and build my RAID 6 on those partitions rather than on the raw disks (using partitions has its ups and downs). I use the array as a physical volume in LVM, create a logical volume in LVM, put EXT4 on it and mount it. I can now expand my already-created RAID volumes, and I can add RAID volumes to LVM and remove other obsolete RAID sets (2 TB is not much in 3 years). With the 5-disk RAID 6 of WD20EADS I'm getting ~160 MB/s write speed, and higher read speed, so a 1 Gbit network is not a problem for me.

    A thing with LVM that might interest you: let's say you make your 2x 15-disk RAID 6 sets in Linux software RAID. You add both sets to LVM, make your logical volume, and tell LVM to place the data on two different physical volumes (RAID 1 in LVM). Now you have your failover. I don't know whether you can convert from a RAID 1 LV back to a non-RAID LV (for when you need more than a 15-disk RAID 6 can give you), but some Google might help you there; a rough sketch of the mirroring idea is below. //Jan Chu
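
    A rough sketch of that LVM mirroring idea, assuming the two RAID 6 arrays are /dev/md0 and /dev/md1; vg_mirror, lv_safe and the 20T size are just examples. On the conversion question, lvconvert can drop the mirror again:

      # the two RAID 6 arrays become the two physical volumes of one volume group
      pvcreate /dev/md0 /dev/md1
      vgcreate vg_mirror /dev/md0 /dev/md1
      # -m1 keeps one extra copy, so LVM writes the data to both PVs
      lvcreate --type raid1 -m1 -L 20T -n lv_safe vg_mirror
      # later, convert the mirrored LV back to a plain (non-mirrored) LV
      lvconvert -m0 vg_mirror/lv_safe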
  4. Jan Chu

    Velociraptor block errors.

    It seems I misunderstood something. The drive does NOT remap; the controller does. So the number of remappings is "not bounded". How many block errors before I can throw this disk in the trash? //Jan
  5. I have 2 VelociRaptors in RAID 1. id1: 17 block errors (approximately 1 more every 3-5 days). id2: 5 block errors (hasn't reported any in a few months). Q1) The drive remaps the block. How many remappings can the drive handle? Is it about to die with 16 block errors, or can I sleep "safe" until it reaches a few thousand? (A smartctl check for this is sketched below.) //Jan Chu
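
    One way to read the drive's own remap counters, assuming smartmontools is installed and the drive shows up as /dev/sda (a drive behind a RAID controller may need the -d option to be reachable):

      # print the SMART attribute table and pull out the remap-related counters
      smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'
      # attribute 5  (Reallocated_Sector_Ct)  = sectors the drive has already remapped
      # attribute 197 (Current_Pending_Sector) = sectors waiting to be remapped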
  6. I wouldn't use RAID 5 alone for that kind of drives... think of your rebuild time when a disk dies. How big is that window? In that period you are not safe if another drive breaks. RAID 6 for me, or RAID 5+1. Regarding performance, I guess it depends: what is your load pattern? Sequential / random? Read / write? If your disks can fill your network connection, then it's no problem. (A quick way to watch that rebuild window is sketched below.) //Jan Chu
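
    A quick sketch of how I'd keep an eye on that rebuild window with Linux software RAID (the speed values are just examples, in KB/s):

      # shows rebuild progress, current speed and estimated time to finish
      cat /proc/mdstat
      # raise the rebuild speed limits if the window is too long for comfort
      sysctl -w dev.raid.speed_limit_min=50000
      sysctl -w dev.raid.speed_limit_max=200000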
  7. Well... I have a UPS, and also a battery on the controller. But that won't help me in the event where my PSU dies, or maybe a BSOD where my server is set to reboot? But that's what I get for using MS on a server, I guess. Naaahh, I haven't had one BSOD on my Windows servers in 3-5 years... so that's probably not going to happen (fingers crossed). //Jan Chu
  8. In the following link you can hear how the Intel SSDs work and what they are planning for the next version: http://intelstudios.edgesuite.net/idf/2009...9_MEMS004/f.htm Very interesting stuff... around 13-16 min, I think :S You can hear that there is NO BBU of any kind on the current X25-E/M, but it is planned for the future. So with this in mind, you should not enable the write cache on the SSDs and then assume your data is protected once written. It also seems that the Flush command is not functioning properly (on all controllers (first post)), but this is still to be tested. Side note: it seems that the OCZ Vertex 2 will have a form of BBU?? (http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=6), but the same article states that Intel also has this form of BBU (which they might have, but it is not used to secure the data in the cache, according to Intel's own developers (link in this post)).

     So my situation is: 1) I've enabled the write cache so I don't trash the drives sooner rather than later. 2) I've stopped seeing the array as secure and have made backup routines and recovery strategies, so in case of an integrity failure the penalty is that I restore the data in hours rather than days. So far I'm happy... the drives are fast with write cache... but I know that some day they will fail on me (the setup dictates it)... and then I might not be so optimistic about the recovery routine. (A crude way to sanity-check the flush behaviour is sketched below.) //Jan Chu
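
    A crude sanity check I'd try (just a sketch; the mount point, file name and counts are examples): time synchronous 4 KB writes with dd. If every write really has to reach the NAND, the rate should be far below the cached write speed; an implausibly high number suggests the flushes are not actually reaching the media.

      # oflag=dsync makes each 4 KB write wait for synchronized completion
      dd if=/dev/zero of=/mnt/ssd/flushtest bs=4k count=1000 oflag=dsync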
  9. When will the controller remove content from its cache? Does the disk signal completion back when the flush cache command has finished? I would love to hear your experiences with the X25 disks. I have several X25-M drives and don't know if they are safe (data integrity). //Jan
  10. Hello, I would like to hear how a RAID controller handles the write cache on a drive. Let's work with the following setup: a RAID controller with a BBU (battery backup unit), write cache enabled on the RAID controller, any RAID level, and hard disks with an onboard cache whose write cache can be enabled. My question is: should I use the onboard write cache on the drive? Does the RAID controller issue Flush Cache commands, and only remove things from its own cache once the drive has guaranteed that the data has been written? Some would say: just disable the onboard write cache, you have write cache on the controller! But let's think SSDs... there the onboard write cache is part of the wear-leveling algorithm, and by disabling it you end up with a shorter lifespan. So I would like to know: is it safe for me to use SSDs with their write cache enabled on a RAID controller with a BBU and controller write cache enabled? Will my data be safe in the event of power loss, BSOD, and so on? (How to check and toggle the drive cache is sketched below.) //Jan Chu
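
    For reference, checking and toggling the drive's own write cache from Linux, assuming the drive shows up as /dev/sdb (drives hidden behind some RAID controllers won't accept this directly):

      # show whether the drive-level write cache is currently on
      hdparm -W /dev/sdb
      # enable (-W1) or disable (-W0) the drive's onboard write cache
      hdparm -W1 /dev/sdb
      hdparm -W0 /dev/sdb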
  11. "Depending on your application load, that's not too bad. We used to routinely kill SSDs in testing in that timespan. Some go only a few weeks." I guess this was without drive caching??? How long did they last with caching (and what models did you test with)? Also, how did your drives fail? Is it normal for a drive to have a "silent death", or should it throw errors when you try to write to it (I would think the latter)? //Jan Chu
  12. Looking at the compatibility list I can indeed see that the X25-M is supported: http://www.adaptec.com/NR/rdonlyres/421154...es5_LowPort.pdf I will try to contact Adaptec for an answer... they should have a better idea of how things work than me. My idea is that if I use the battery and write cache on the RAID controller, then the controller will issue the Flush commands and only remove things from its own cache when it KNOWS that the drives have actually written the data (and not just cached it internally). //Jan Chu
  13. Jan Chu

    Cable Connector Quiz

    10/10 =) It was way too easy... there should have been WAY more pictures =) But it was fun... a bit too easy, but still... fun. //Jan Chu
  14. My bad with the link, it seems I'm going blind and can't see the difference between E and M =) As far as I can see, the difference between the two controllers is that the Z series has zero-maintenance cache protection (no battery, but supercapacitors plus flash memory to write the cache to). So "I guess" it is supported. I haven't done any actual testing of my own, but I do have the drives working on the controller, so it is able to use the disks. But all this talk about not honoring Flush commands makes me scared of what might happen if it is true =) losing a whole array in one swoop. And if Intel can't make it work in the X25-E, what are the chances that it works in the X25-M?? One might hope the G2 works better than the original X25-E, but I can't find any mention of a fix in their firmware updates.

      Right now I have disabled the cache, but I would rather not. I think I just killed a bunch of Vertex drives because of the shortened lifespan with disabled cache, high IO and 6 months in production... They write to disk and read from disk with no problems :S other than the fact that what comes back is not the same data that was written, so they are pretty much useless now. I'm fearing the worst with the Intel drives... 6 months is a bit short for my taste. I'll try to contact Adaptec to hear if they have anything to say and what their recommendation is for using the SSDs (drive write cache on or off). But as I understand it, the commands are pretty standard and the same in IDE, ATA, SATA, SCSI and SAS? (A write-and-verify check for that kind of silent corruption is sketched below.) //Jan Chu
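
    A sketch of how I'd check for that kind of silent corruption (read-back not matching what was written), assuming fio is available and /dev/sdX is a scratch drive whose contents can be destroyed:

      # write a pattern with embedded checksums, then read it back and verify it
      fio --name=verify-run --filename=/dev/sdX --rw=write --bs=4k --size=4g \
          --direct=1 --verify=crc32c --do_verify=1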
  15. Hello, I've gotten myself a couple of Intel X25-M G2 160GB SSDs and would like to use them in a RAID 10 setup with an Adaptec 5805. I know, of course, that I will not be the happy owner of TRIM support, but with the G2 the performance hit should not be that bad.

      The problem: it seems that the drive doesn't honor the FLUSH CACHE command! http://opensolaris.org/jive/thread.jspa?threadID=121424 http://www.mysqlperformanceblog.com/2009/0...t-transactions/ If you look at the tech document from Intel, it states that the drive does support Flush Cache: http://download.intel.com/support/ssdc/hps...l34nm322296.pdf Can anyone else confirm whether I'm right that the X25-M G2 does not honor flush commands? If this is true, then it is not even safe to use the drive with its cache enabled behind a RAID controller with a battery, because the drive could still be holding data in its cache in the event of a power loss. So the only safe scenario is with the cache disabled, which gives me only ~1,000 IOPS in IOMeter 4k random write (as opposed to 12k IOPS with the cache enabled). Also, it has been stated that the SSD's wear-leveling algorithm depends on its cache, so by disabling the cache I will be shortening the drive's lifetime (however short it may already be).

      What should I do to get the best out of these devils? It seems to me that my choices are: A: live with a drive that doesn't guarantee data integrity (in RAID) and keep the performance, or B: disable the cache, take 1/10 of the random-write performance, and accept a decreased lifespan. (A rough fio equivalent of that 4k random-write test is sketched below.) //Jan Chu
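
    A rough fio equivalent of that IOMeter 4k random-write test, just as a sketch (assumes fio is installed and /dev/sdX is a scratch device whose contents can be destroyed); run it once with the drive cache enabled and once with it disabled and compare the reported IOPS:

      # 60 seconds of 4 KB random writes, 32 outstanding IOs, bypassing the page cache
      fio --name=randwrite-4k --filename=/dev/sdX --rw=randwrite --bs=4k \
          --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based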