lecaf

Member
  • Content Count: 13
Community Reputation
  1 Neutral

About lecaf

  • Rank: Member

Profile Information

  • Location: Brussels
  • Interests: Lena, Windows, RAID, iSCSI, Hyper-V, performance, security, malware, iOS
  1. lecaf

    RAID 5: Writes faster than Reads ?

    Yep, disabled the write cache and performance clearly drops... worse than a standalone drive (these numbers scare me, are they normal?). I can't see much of a difference, but true, "Direct" is the LSI recommendation. m a r c
  2. Hi, I have 3× 3TB WD Red drives on an LSI 9260CV in RAID 5. Read = Always Read Ahead, IO Policy = Cached IO, Write = Always Write Back. OS = Win 2012 R2, latest firmware and drivers. I did some benchmarks and I can only explain the sequential results; the rest I don't get. RAID 5 is supposed to have slower writes due to the parity calculation. The LSI card has 512MB of cache and for sure it influences the results: the numbers get smaller as the cache/file-size ratio changes. While this is normal, there is always 50% more throughput for random writes, and this is consistent whatever the file size. I would expect that ratio to drop as the test file grows bigger (if the cache were the reason for this strange performance). Here are results for a tiny file that fits entirely into the board's cache, so the numbers reflect PCI transfer, not disk performance. What did I miss? m a r c
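     A toy model of how a write-back cache skews these numbers (the bus and disk rates below are assumed round figures, not measurements of the 9260CV): a test file that fits in the 512MB cache lands at bus speed, and the advantage should wash out as the file grows past the cache, which is why a constant 50% random-write edge is hard to pin on the cache alone:

```python
# Toy model of how a RAID controller's write-back cache skews benchmark
# results. All figures are illustrative assumptions: disk_rate is a raw
# RAID 5 random-write rate, bus_rate is the host-to-cache transfer rate.

CACHE_MB = 512        # on-board controller cache
disk_rate = 60.0      # MB/s, assumed sustained random-write rate to disks
bus_rate = 1500.0     # MB/s, assumed rate when a write is absorbed by cache

def apparent_write_rate(file_mb: float) -> float:
    """Blended throughput a benchmark reports for one pass over the file.

    The first CACHE_MB land in cache at bus speed; anything beyond that
    is throttled to the speed at which the disks can destage.
    """
    cached = min(file_mb, CACHE_MB)
    spilled = max(file_mb - CACHE_MB, 0.0)
    elapsed = cached / bus_rate + spilled / disk_rate
    return file_mb / elapsed

for size in (100, 512, 1024, 4096, 16384):
    print(f"{size:>6} MB file -> {apparent_write_rate(size):7.1f} MB/s apparent")
```

     Under this model the inflated numbers collapse once the file is a few times the cache size, so a file-size-independent random-write advantage points at something other than the cache.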
  3. lecaf

    which RAID is suitable?

    Cheapest TB/€ seems to be 3TB drives. I'd consider WD Caviar Green (a cheaper equivalent might be better). For 10TB in RAID 5 you would need 4 disks + 1 parity = 5 disks, or 12TB of space. Considering Green reliability I'd go for RAID 6 = 6 disks, or 12TB of space, plus a decent controller; estimated cost would be 6×€100 + 1×€500 = €1100, or $1500. Another option I would consider would be a JBOD of 5 disks and a ZFS sauce like Solaris or FreeBSD (lower controller cost). The fastest mechanical drive you can afford (or two in RAID 0). As for SSDs, maybe 4×256GB = €600, extended as a single volume or in RAID 0 (depending on controller availability). But of course, as the other replies mentioned: what do you want to do with them? Pro? SOHO? Enthusiast? Which OS? And what is the budget? m a r c
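     A quick sanity check of the capacity and cost arithmetic above (drive size, prices, and the 10TB target come from the post; the one-parity-disk and two-parity-disk overhead rules are standard RAID 5/6):

```python
# Sanity check of the RAID 5 / RAID 6 sizing above: 3TB drives, 10TB
# target, €100/drive plus a €500 controller, as stated in the post.

import math

DRIVE_TB = 3
TARGET_TB = 10

def disks_needed(target_tb: float, drive_tb: float, parity_disks: int) -> int:
    """Smallest array (data + parity disks) whose usable space covers the target."""
    data = math.ceil(target_tb / drive_tb)
    return data + parity_disks

for name, parity in (("RAID 5", 1), ("RAID 6", 2)):
    n = disks_needed(TARGET_TB, DRIVE_TB, parity)
    usable = (n - parity) * DRIVE_TB
    cost = n * 100 + 500              # €100 per drive + €500 controller
    print(f"{name}: {n} disks, {usable} TB usable, ~€{cost}")
# RAID 5: 5 disks, 12 TB usable; RAID 6: 6 disks, 12 TB usable, ~€1100.
```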
  4. Why would cheap 40GB SSDs on SATA not be a viable solution? http://www.amazon.co.uk/Intel-Series-9-5mm-SATA2-Solid/dp/B004U8TBJY/ref=sr_1_2?ie=UTF8&qid=1396271775&sr=8-2&keywords=ssd+40gb m a r c
  5. Power issue? Maybe the PSU cannot cope with the controller + drives, just a suggestion...
  6. lecaf

    Safe way to upgrade raid 1

    Duncanc is right, it is always better to work on backed-up data... but well, in real life... If you have 4 SATA ports you could also add the 2 new disks, create a new mirror disk with ICHR (no Windows volumes yet) and use Windows dynamic disks to mirror the content of raid.old onto raid.new. Once the Windows mirror is done, bring raid.old offline and break the mirror. Expand the volumes if needed. The main advantage is that the system stays online during the procedure. The disadvantages are that it is slower, and it is a bit tricky if the RAID disk is also the boot drive. m a r c PS: you also end up with a dynamic volume, but as I only use Windows I can't say it is a real con.
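     For what it's worth, a sketch of that procedure as a diskpart script driven from Python. All disk and volume numbers are hypothetical placeholders, so verify them with "list disk"/"list volume" first, run from an elevated prompt, and keep that backup handy, as duncanc advised:

```python
# Hedged sketch of the dynamic-disk mirror migration described above,
# expressed as a diskpart script run via "diskpart /s <file>".
# OLD_DISK / NEW_DISK / DATA_VOL are placeholders, not real numbers.

import os
import subprocess
import tempfile

OLD_DISK, NEW_DISK, DATA_VOL = 1, 2, 3   # hypothetical; check with list disk/volume

def run_diskpart(commands: str) -> None:
    """Feed a script to diskpart via 'diskpart /s <file>'."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(commands)
        path = f.name
    try:
        subprocess.run(["diskpart", "/s", path], check=True)
    finally:
        os.unlink(path)

# Step 1: make both disks dynamic, then mirror the volume onto raid.new.
run_diskpart(f"""
select disk {OLD_DISK}
convert dynamic
select disk {NEW_DISK}
convert dynamic
select volume {DATA_VOL}
add disk={NEW_DISK}
""")

# Step 2 (later, once "list volume" shows the resync is Healthy): take
# raid.old offline and drop its half of the mirror, e.g.:
# run_diskpart(f"select volume {DATA_VOL}\nbreak disk={OLD_DISK} nokeep")
```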
  7. lecaf

    LSI 9260-16i & WD Red Performance

    Hi, I got a 9260CV with 3 Reds and have seen this too. Try forcing the cache to write-back; don't use the "No Write Cache if Bad BBU" option, I think that setting bugs out sometimes. m a r c
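     If you prefer the command line to MSM for this, a hedged sketch driving MegaCLI from Python; the ForcedWB property (the CLI equivalent of "Always Write Back") and the -Lall/-a0 selectors are assumptions to double-check against your MegaCLI version's help output:

```python
# Hedged sketch: force the logical drives to "Always Write Back" via
# MegaCLI instead of the MSM GUI. ForcedWB keeps write-back enabled even
# when the BBU is reported bad -- only sensible if the box is on a UPS.
# The binary name (MegaCli vs MegaCli64) and the -Lall/-a0 selectors are
# assumptions; verify them for your setup before running.

import subprocess

# Set "Always Write Back" on all logical drives of adapter 0:
subprocess.run(
    ["MegaCli", "-LDSetProp", "ForcedWB", "-Immediate", "-Lall", "-a0"],
    check=True,
)

# Confirm the cache policy took effect:
subprocess.run(["MegaCli", "-LDInfo", "-Lall", "-a0"], check=True)
```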
  8. > Please note that striping is very very dangerous (NOT spelled "stripping")

     Hmm, maybe I've been to too many strip clubs lately... and stripping is dangerous, you can catch a cold, it's winter time after all.

     > I dispute that claim chiefly because: ... wear ...

     I would agree with your analysis if all hard disks were born equal against wear. I've yet to see any array (RAID 1 or 5 or 6 or whatever) where all the drives broke within the same time period. The first can die in months, the next in years...

     > I don't exactly know where all this fear-mongering about RAID 0 arrays originated.

     In the old days I had an Athlon 64 with a RAID 0 as the boot drive (Windows 2000, later upgraded to XP). While performance was superb, one ugly day I did lose the array, and my OS was a goner. It wasn't a major blow, as no data was irreplaceable (I did plan for that), but re-downloading and re-installing gigs of Steam games is time-consuming and no fun. So I guess the origin of my fear-mongering is... my own personal experience. For an enterprise, RAID 0 (database logs are a good example), yes, I'm for it, as long as disaster recovery is planned and downtime mitigated. I don't think that was the scope of Jonas's question. m a r c
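     Setting aside the drive-to-drive variation, the basic arithmetic behind the fear is simply that a stripe is lost when any member is lost, so the risks multiply; a sketch with an assumed, purely illustrative 5% annual failure rate per drive:

```python
# Back-of-envelope illustration of why striping multiplies risk: a RAID 0
# array survives only if EVERY member survives, so the loss probability is
# 1 - (1 - p)^n. The 5%/year rate is an assumed figure for illustration,
# not a measured statistic for any particular drive.

p_fail = 0.05                      # assumed annual failure rate per drive

for drives in (1, 2, 4):
    p_array_loss = 1 - (1 - p_fail) ** drives
    print(f"RAID 0 of {drives} drive(s): {p_array_loss:.1%} chance of loss per year")
# 1 drive -> 5.0%, 2 -> 9.8%, 4 -> 18.5%: every disk you stripe adds risk.
```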
  9. Hi, what do you mean by software RAID? If it is Windows dynamic disks, the answer is no! Only mirror (at least, that's all I was able to do back with Win 2003). If it is a mobo chipset, then most probably you also have drivers that you can load during the Windows install. Then Windows will see the drives as one regular drive and will install on it (first create the drive in the controller BIOS). Please note that stripping is very very dangerous... performance-wise you're better off with a cheap SSD. m a r c
  10. Hi, I don't know exactly what your budget and needs are, but this would be my dream SQL box:
      - Boot/OS/binaries/pagefile/crash dump: 2 mechanical HDDs in a mirror (can be slow, you only boot the OS once; pagefile swapping might be slow, but with a proper amount of memory there will barely be any)
      - RAID (0+)1 of SSDs for SQL logs
      - RAID 6 of 4× 15K SAS for SQL databases
      Now, if the database data is small enough:
      - RAID 1 of SSDs for the data
      - RAID 6 of 4× 10K SAS for BLOB/FILESTREAM storage (optional)
      Hope that gives you ideas. m a r c
  11. lecaf

    Encryption

    It depends on the motivation: why do you need to encrypt, and how? If you use full-disk encryption on a NAS, your data is encrypted but most probably the key is known by the OS (unless you would like to enter the key at each reboot). This encryption scheme would safeguard you from someone stealing a disk, but not from someone stealing all the hardware. The performance impact is typically minimal. If you encrypt client-side (volume or file encryption) then you must find a way to share the key with all devices accessing the data, and there will be some performance impact (depending on what sharing protocol you use; SMB is the worst). If you only need to encrypt a few files then a simple freeware like AES Crypt could do the job.
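     To make the client-side option concrete: AES Crypt itself is a standalone tool, but the same idea fits in a few lines of Python with the third-party cryptography package (Fernet recipe; the file names below are hypothetical). It also makes the key-sharing problem visible, since every device needs key.bin:

```python
# Minimal client-side file encryption sketch using the "cryptography"
# package's Fernet recipe (AES-CBC plus an HMAC). Anyone who holds
# key.bin can decrypt -- exactly the key-distribution problem mentioned
# above. File names are hypothetical examples.

from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # must be shared with every client
Path("key.bin").write_bytes(key)

f = Fernet(key)
plaintext = Path("report.xlsx").read_bytes()
Path("report.xlsx.enc").write_bytes(f.encrypt(plaintext))

# Any device that has key.bin can reverse it:
recovered = Fernet(Path("key.bin").read_bytes()).decrypt(
    Path("report.xlsx.enc").read_bytes()
)
assert recovered == plaintext
```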
  12. I'd like to see if Shogun 2: Total War loading times are faster using this array, lol. In fact, what you could do is some ridiculous comparisons of this ludicrous array against a single SATA mechanical HDD. Some ideas: Windows install time in a VM, data copy time. Sure, some business-related benchmarks of a journaled database would be more serious and professional-looking, but... I already know what the results would be. What I'd like to see is the fun angle. m a r c
  13. Hi, about RAID rebuild: this is a known "feature" of Windows... (you didn't mention any hardware RAID, so I assume it's Windows doing the RAID). MS did a lame implementation of rebuild: it rebuilds ALL partitions (volumes) at the same time, so if you have 4 partitions the disk heads have to reposition between reads/writes 4 times for each block. That can explain why you needed a week. Moreover, this "feature" has another drawback: disks will start dying if you rebuild often. One question though: how do you explain the 2MB seq score... it seems amazing compared to the others... is it memory caching? (That could explain writes being faster than reads in the later tests.) I ain't got nothing against big memory caches, it's just you'd better know about it, so you can equip your device with a UPS, or you'll lose data in case of a power outage. Thanks for the test, it's the first challenge I see for WSS and it seems to perform honorably (I would have liked some iSCSI tests too). le
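     A toy model of that rebuild penalty, taking the post's own claim that with P partitions the heads reposition P times per block (the rates and seek time are assumed round numbers, not measurements of any particular drive):

```python
# Toy model of the parallel-rebuild "feature" described above: rebuilding
# all partitions at once makes the heads reposition between partitions for
# every chunk, so seek time dominates and the rebuild stretches out.

seq_rate = 100.0      # MB/s, assumed copy rate with no seeking
seek_s = 0.015        # s, assumed average head repositioning time
chunk_mb = 1.0        # MB rebuilt per partition visit
disk_mb = 2_000_000   # 2 TB disk

for partitions in (1, 2, 4):
    # per the post: P partitions -> P repositionings per rebuilt block
    seeks = 0 if partitions == 1 else partitions
    per_chunk = chunk_mb / seq_rate + seeks * seek_s
    rate = chunk_mb / per_chunk
    hours = disk_mb / rate / 3600
    print(f"{partitions} partition(s): {rate:6.1f} MB/s, ~{hours:5.1f} h rebuild")
```

     With these assumed figures, a 4-partition rebuild runs at roughly a seventh of the sequential rate, which is how a rebuild that should take hours can stretch toward a week.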