Darkknight

Member
  • Content Count

    14
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Darkknight

  • Rank
    Member
  1. Darkknight

    Raid 5 technical questions

    Sorry, I don't have topic notifications turned on any more, so I only see this topic when I stop by the forum. Win 7 x64/x86 does not have an ideal offset built in; it is simply *improved* over previous versions. The fact of the matter is, there is no magic number that will work for everyone's setup. Different hardware, different drive counts, and different drivers will produce different results. I suspect there is less volatility in this formula when using a discrete, true hardware RAID card, but as I have no experience with that, I cannot say for sure.

    As I subsequently discovered, RAIDs are much more complicated than they look. I had honestly expected to hook up the drives, run the IRST manager, and have awesome results... That was not the case, as this thread details. Bottom line: best practice is to properly align the partition to suit your setup. This requires prodigious use of Diskpart & ATTO.

    RAID 5 read speeds, when properly configured, should resemble RAID 0 read speeds - i.e. speeds scale so that each additional drive adds roughly one drive's worth of read throughput [(# drives - 1) * single-drive read = RAID 5 read speed]. There is of course overhead, so you will never quite reach that, but it's a good guideline. Take, for example, my final setup: single-drive average linear read ~125MB/s; 5-drive (really 4x data + parity) RAID 5 average read ~490MB/s. Pretty close, if you ask me. It took nigh on 50+ benchmarks and a crap-ton of hours logged to get there, though. Write speeds are the big question mark, however, and largely result from a combination of hardware, O/S overhead, disk count, and geometry. Everything helpful that I know is already in this thread. Benchmarking is your friend.

    Intel RAID (IMO) is not as bad as people make it out to be. My results are stellar as far as I'm concerned, and I didn't pay a ridiculous amount for a hardware card only to duplicate functionality I already have. I have built, broken, and even rebuilt from a failed drive. I've been using it for months, and it works great *for my needs*.

    The procedure for manually aligning a partition is not complicated, but there are a lot of steps that need to be followed carefully. Rather than trying to copy/paste, I suggest you use Google - "Diskpart manual align" - and read up. I don't mean that to be unhelpful, just that I don't think I'm the best authority to consult. I do use GbE exclusively for my wired links, and all tests were performed wired.

    4K-sector drives are a different problem entirely. Some use internal 512-byte emulation, some expose the 4K sectors directly on the interface. They have a different process for aligning partitions that I don't fully understand, not having had to do it myself. I know enough from what I studied to know that sector emulation adds another level of complexity that makes the process very different from standard 512-byte drives.

    Final note: your read/write speeds look very good for a 3-drive array, IMO. I suppose your goal should be to make sure the RAID is not the limiting factor in storage transfers. Beyond that, speed for the sake of speed that you will not be able to use seems like a waste of time. In my case, my writes were repeatedly limited when I had multiple source drives concurrently writing to the array on the same machine, which happens often for me. After my last adjustments, they run at about 85% of full speed when writing together, which I consider an acceptable overhead. I hope that was helpful.
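    A rough sketch of that scaling guideline, with my numbers plugged in (the little helper is purely illustrative, not part of any tool):

```python
# Back-of-the-envelope guideline from the post above: RAID 5 sequential reads
# scale roughly like RAID 0 across the data drives, i.e.
#   (# drives - 1) * single-drive read  ~=  RAID 5 read speed
# Real results land somewhat below this because of overhead.

def raid5_read_guideline(n_drives, single_drive_mb_s):
    data_drives = n_drives - 1   # one drive's worth of space goes to parity
    return data_drives * single_drive_mb_s

# My setup: 5 drives at ~125 MB/s average linear read each
print(raid5_read_guideline(5, 125))   # 500 -> I measured ~490 MB/s, close to the guideline
```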
  2. Darkknight

    Raid 5 technical questions

    Just a little update: I had a drive fail on me about 5 days ago. It dropped out of the array with a failed disk/SMART error ("SMART Command Failed") message. My first thought was a CCTL/TLER error, but I tested the drive using Hitachi's drive tools, and as it turns out, the disk had developed a ton of corrupt sectors and legitimately failed. I exchanged it today at M/C for a new unit. The rebuild finished in under 5 hours, fully successful, with no loss of data (isn't R5 cool?). Had I given up on R5 before getting it working, I would have lost some or maybe all of the data I had stored - at least the part that was on that drive - as I would have with other RAID variants. As it stands, I not only didn't lose a thing, but I was able to keep using the array the entire time. The hardest, most aggravating part of the whole ordeal was just getting the replacement drive.
  3. Are quad & hex cores worth it? How long ago was it that multi-processing environments were the domain of servers only? Until the advent of SSDs, storage subsystem performance had lagged far, *far* behind the performance growth seen by nearly every other subsystem in a PC. RAID 0 was, for many, the only way to work around the inherent limitations of mechanical disks. So-called prosumers have very much blurred the lines between workstation and desktop environments. Often they work out of their homes, using standard-build computers from the likes of HP & Dell, only to discover how painfully slow editing videos, working on large image sets, etc., can be. For these folks, RAID 0 (for scratch use) was exactly the shot in the arm their work needed. For enthusiasts, R0 offers a performance level you can brag to your friends about, while feeding your A-D-D need to load ASAP. Yeah, I fully believe RAID has relevance to the desktop, if you can get past the hurdles. The real question in choosing between those drives is: do you need better random R/W performance, or sequential? The VR will provide much better random R/W results due to the lower seek time from the higher RPM. R0 is good if you have a good controller supporting independent reads & writes. If sequential R/W is what you need (e.g. large contiguous blocks of data), R0 will blow the VR out of the water. I wouldn't worry too much about a 2-drive R0 failing. The VR isn't invulnerable to failure either, though it's likely more reliable. The truth is, if you value your data, you'll already have a backup, so longevity isn't an issue.
  4. Old topic, I know, but this saved me from buying a crap eBay adapter, which I was literally one click away from doing. First, I wanted to say thanks for the review. Second, I looked around, and the Marvell-based adapters can be had HERE for about $7 + S&H. I paid $17 for two. They're not the $2.50 eBay ones from HK, but you get what you pay for.
  5. Darkknight

    Raid 5 technical questions

    Just when I was sure I had hit a dead end, I think I stumbled on the answer. Let's all raise a glass to utter, dogged determination to do things the way you want, and to the wives who believe in us no matter how many times we don't succeed! I realigned the partition manually using different settings. Win 7's 1024KB alignment *is not* - I repeat, *is not* - ideal for every RAID. Simply aligning blocks & clusters is not enough. I boosted my speeds from 50MB/s write & 350MB/s read up to 80MB/s write & 500MB/s read! I'm pretty sure I can get even more out of it. I'll have to re-bench the different cluster/block ratios again, and hand-align the damn partition each time, but this is clearly where my RAID was falling down. In fact, I think this is likely the single biggest source of ICHxR R5 speed complaints.

    I would post the settings I've used, but if there is anything I have learned doing this, it's that every RAID is different, and you need to find the settings that work for *your* hardware by testing and research. Posting my settings would only tempt you to take the lazy way out, when in fact my settings may well make things worse for you. Hopefully, for the next guy reading this with his R5 problems, I've pointed you in the right direction and this might fix it for you too. It's taken me 5 solid days of research & experimentation to find the culprit. To be fair, partition alignment was the one thing I didn't alter in my quest for better speeds. I read over & over that the 1024K alignment was fine - probably from the same people who insist ICH/hybrid RAID = bad & software RAID = fail without ever trying it, basically parroting what they've heard from someone else who did try it but never really tried to fix it.

    Edit: 280MB/s avg write speed & 490MB/s avg read, sucka! Anyone who says Intel "fake raid" sucks, and that you *need* a dedicated card to get good R5 performance, can kiss my ICH. Victory:
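    Since I'm not posting my settings, here is at least the kind of arithmetic I mean by "aligning the partition to the array" - a hypothetical 4-drive example, not my configuration:

```python
# Minimal sketch of the alignment arithmetic: a partition start offset is
# "clean" for an array if it lands on a stripe boundary, and ideally on a
# full-stripe (stripe size x data drives) boundary. All values here are
# examples, not recommended settings.

KiB = 1024

def check_offset(offset_bytes, stripe_kib, n_drives):
    stripe = stripe_kib * KiB
    full_stripe = stripe * (n_drives - 1)   # data portion of one RAID 5 stripe
    return {
        "stripe_aligned": offset_bytes % stripe == 0,
        "full_stripe_aligned": offset_bytes % full_stripe == 0,
    }

# Windows 7's default 1 MiB offset vs a hypothetical 4-drive R5 with 128 KiB stripe:
print(check_offset(1024 * KiB, 128, 4))   # {'stripe_aligned': True, 'full_stripe_aligned': False}
```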
  6. Darkknight

    Raid 5 technical questions

    I don't think there is a drive problem at all. I created a RAID 0 array with IRST to bench against, and I'm getting 500MB/s+ write & 600MB/s+ read speeds. That's not indicative of a drive or cable problem. The problem seems to lie with the RST driver's R5 implementation. I've been considering all options recently: ZFS, VSF, WHS in a VM. None of these are good solutions (for me), IMO. I have honestly even considered Windows dynamic disk spanning, just to get the volume I want. My reason for not liking that idea isn't the FUD surrounding its use - you can actually use 3rd-party tools, cheap or even free, to recover any files left on the non-damaged disks in a spanned array; you just can't do this from the LDM. I don't want to use it because it won't balance the data usage between all the drives. Tested a different combo: a 4-drive R5 delivers 75-90MB/s writes.
  7. Darkknight

    Raid 5 technical questions

    Broke the array and am testing each drive, but repeat tests on the same drive are not giving similar results: 135MB/s write, then 145MB/s write... Is ATTO that unreliable? None of them benched under 133MB/s, in any case.
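    For what it's worth, here's the quick sanity check I ran on that spread (just my own reasoning; the two runs are the results quoted above):

```python
# Quick look at run-to-run benchmark spread: the two write results above differ
# by roughly 7%, which reads to me as ordinary bench-to-bench noise rather than
# a sign of a failing drive.
from statistics import mean, pstdev

runs_mb_s = [135, 145]                 # repeated ATTO write results, same drive
avg = mean(runs_mb_s)
spread_pct = 100 * (max(runs_mb_s) - min(runs_mb_s)) / avg
print(f"avg {avg:.0f} MB/s, spread {spread_pct:.1f}%, stdev {pstdev(runs_mb_s):.1f} MB/s")
```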
  8. Darkknight

    Raid 5 technical questions

    Wouldn't the MSM/RST app detect a bad disk? I'll break the f'ing array again - I know you're right - I just hate the 16-hour rebuild process. If I do manage to find & fix this issue, would the rebuild time drop proportionally to the gain in write speed?
  9. Darkknight

    Raid 5 technical questions

    I deleted the 2 partitions, one of which did not go quietly. I created a single new partition using diskpart with align=1024. Listing the details of the new partition shows it does in fact start at offset 1024K. Write speeds are still slow, and now, for some reason, read speeds have plummeted. I no longer get the ~65MB/s copy speed, and ATTO looks stupidly slow on both reads and writes. I'm not even certain ATTO is reliable at this point. Losing hope - I need someone to throw me a life preserver here. Giving up on it for tonight.
  10. Darkknight

    Raid 5 technical questions

    Does anything look out of place with this? I'm not sure, but I think it means the partition is not aligned properly. It's supposed to be a single partition aligned at 1024KB, right? I checked W7's registry entry, and it's set for 1048576 bytes. If it's not aligned properly, it's possible that *all* of the benching I did yesterday is invalid, and I would get different results. What is the "reserved" partition in there for? I simply used the LDM to initialize the disk and format it as a simple volume. I didn't ask for 2 partitions... Can someone who knows what the offsets are supposed to be give me an idea whether this is the way it's supposed to look? Thanks
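    For anyone trying to answer the same question, this is roughly how I've been checking the actual offsets (assumes wmic is available; the 1 MiB and 128 KiB boundaries are just example values):

```python
# Dump the partition start offsets Windows reports, then check a reported
# offset against the alignment you asked diskpart for. wmic is assumed to be
# present; on newer systems PowerShell's Get-Partition shows the same info.
import subprocess

print(subprocess.check_output(
    ["wmic", "partition", "get", "Name,Size,StartingOffset,Type"], text=True))

# Sanity-check a reported offset by hand, e.g. the 1048576-byte figure from the
# registry entry mentioned above:
offset = 1048576                       # paste the StartingOffset value you got
print("1 MiB aligned:", offset % (1024 * 1024) == 0)
print("128 KiB stripe aligned:", offset % (128 * 1024) == 0)
```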
  11. Darkknight

    Raid 5 technical questions

    All read & write rates listed below are for sequential operations. It's a large file server, not a SQL DB; max throughput is more important than IOPS for my needs. Well, at least I'm past that point. I've been busting my .. HDDs for the past 2 days trying to come up with a good combo for the stripe & FS cluster size. I took the advice of another NAS builder and simply benchmarked my setup (ATTO) from a 128k stripe & 64k NTFS cluster all the way down to a 16k stripe & 2k cluster, with every option in between (there went my Saturday). When I had identified the best combos for each stripe, I then used Teracopy to move an 8GB ISO to and from the array, then over the network, to verify real-world results.

    128k/32k produces the best read results by far, at a staggering 500MB/s avg read speed. The writes are an abysmal 31MB/s, however. *Write-back cache is enabled* - with it disabled, it produced 500KB/s (yes, KILOBYTE) write rates. The best write speeds were achieved with a 32k stripe / 4k cluster combo. With that, write speeds are an improved ~50MB/s, and read speeds are in the 300MB/s range, which coincidentally are both second only to the 128k/32k combo. Since this array is data storage only, and ultimately limited by GbE (125MB/s theoretical max) anyway, I felt it was better to go with the 32k/4k combo for the higher write speeds.

    The problem I'm having is that elsewhere on the interweb (I should really stop reading about this), I'm seeing people achieve 100MB/s write results on ICHxR RAID 5. Many of the benchmarked systems are even using older, smaller, & slower drives! W---T---F?! I don't know what else to try at this point. To compound the problem, local writes to the array are ~50MB/s, while network writes are 15MB/s! Network reads & writes to other single drives on the same system are ~65MB/s. Network reads from the array are ~65MB/s. 65MB/s seems to be a chipset limitation at this point, because the individual drives bench faster than this but can't read/write to each other beyond that number. Pulling out my hair here. I could use some (helpful) pointers to try to solve this conundrum. I'm definitely not in a position to fork out any extra $$$ for a dedicated RAID card ATM, due to the extra expenditures just to get this rolling, so please don't suggest that. Even if that were the only solution, I'd stick with the 50MB/s writes. The point is, though, that ICHxR has proven higher throughput available than what I'm experiencing. Given that simply altering the stripe & cluster sizes produced an incredible speed boost, I really think some sort of configuration issue must be holding me back from faster sequential writes. Basically, based on the other hardware I'm using, I'd realistically like to see 100MB/s benched writes. That's enough to max out the capability of the drive(s) the data will be copied from, and it's beyond what appears to be some sort of artificial limitation putting the brakes on at 65MB/s anyway.

    Edit - the picture is even muddier now: if I push the file to the array from another system, it's 15MB/s; if I pull it onto the array from that same system, it's 53MB/s. Why would it matter where the transfer is started from? Both systems run Win 7, btw.

    Edit #2: It seems there is some inherent problem with CIFS/SMB whereby file transfer speeds are sometimes affected by the direction of initiation. There is no single fix for this; suggestions range from turning off advanced networking features (RSS, TCP chimney, SMB2, etc.) to replacing NICs, cables, and other hardware.

    Bottom line, I suppose, is that the use case I'm having trouble with is not common and is easily worked around. I don't like to leave things "broken", but TBH I have been down this road before, unsuccessfully, and I have bigger fish to fry ATM. I'd rather any potential replies focus on the R5 write speed issue. Thanks!
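    For the curious, this is more or less how I weighed the two front-running combos (the numbers are my measurements above; everything else is just illustrative):

```python
# Sketch of how the stripe/cluster results above were weighed: reads beyond
# what GbE can deliver (~125 MB/s theoretical, less in practice) are wasted on
# a network file server, so the combo is picked on write speed first.

GBE_LIMIT_MB_S = 125   # theoretical GbE ceiling

# (stripe KiB, cluster KiB): (avg write MB/s, avg read MB/s) from ATTO/Teracopy
results = {
    (128, 32): (31, 500),
    (32, 4): (50, 300),
}

def usable(read_mb_s):
    # read throughput above the network ceiling can't be used by clients
    return min(read_mb_s, GBE_LIMIT_MB_S)

best = max(results, key=lambda combo: (results[combo][0], usable(results[combo][1])))
print("pick:", best, "->", results[best])   # (32, 4) -> (50, 300)
```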
  12. Darkknight

    Raid 5 technical questions

    I ended up installing W7. I spent all day considering my options, and every other option seemed less reliable, and harder if not outright impossible to migrate out of once the array is in use. Neither drawback is acceptable just to save the few hours' work of installing and tweaking to get W7 working the way I need it. With W7 (32-bit) installed, the array shows up fine, and I have the option to format it. I had done a lot of research on this, but I've pored over so much information that I simply cannot remember what I had decided on as far as cluster size. My stripe size is 128k. I store mostly large files, read a lot, and write a little, though writing (ripping) is the more time-consuming effort. Suggestions on allocation unit size?

    FWIW, the final hardware installation looks pretty neat. I went through a considerable amount of trouble routing cables around to minimize airflow obstructions around the array drives, and even made a custom power cable just to fit the drives. It wasn't terribly complicated, but it was certainly more time/effort than I expect a lot of people to put in when building a home server. The 4-in-3 HDD holder also looks pretty nifty sitting in the front of the case. I'll have to take some pictures eventually.
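    To frame the allocation-unit question, these are the ratios I'm weighing, assuming my 5-drive array and 128k stripe (the cluster sizes are just the standard NTFS choices, not a recommendation):

```python
# How many NTFS clusters fit in one stripe, and in one full data stripe, of a
# 5-drive RAID 5 with a 128 KiB stripe (4 x 128 KiB of data per full stripe).
# This doesn't pick a winner; it just shows the ratios involved.

KiB = 1024
STRIPE = 128 * KiB
DATA_DRIVES = 4                     # 5-drive RAID 5 = 4 data + 1 parity equivalent
FULL_STRIPE = STRIPE * DATA_DRIVES

for cluster_kib in (4, 8, 16, 32, 64):   # common NTFS allocation unit sizes
    cluster = cluster_kib * KiB
    print(f"{cluster_kib:>2} KiB cluster: {STRIPE // cluster:>3} per stripe, "
          f"{FULL_STRIPE // cluster:>4} per full stripe")
```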
  13. Darkknight

    Raid 5 technical questions

    Argh, RAID woes! I finally finished building the array yesterday, and initializing finished a few minutes ago. I can definitely say that I cannot find a way to get 32-bit XP to see the full 3.8TB as a single array. The Logical Disk Manager doesn't even see the array at all. Using a 3rd-party HDD utility, I can see and partition the array, but only up to 2048GB. I had hoped that if the array blocks were larger it wouldn't be a problem, but XP still sees 512b sectors. I don't think I can even make (2) 2TB partitions. I think that when creating the array, if I had made 2 arrays (each half the size - 2TB), Windows would have been fine with it. So I'm faced with the choice of breaking the array and spending another day re-initializing it, or upgrading Windows.

    I have learned a few things, though: 1) The Antec P182 natively only holds 6 HDDs (had to drive back to M/C, 45 min away, to get a 4-in-3 HDD device module). 2) Microcenter does not stock 5 of any one type of inexpensive right-angle SATA cable; apparently there are 50 or so $20 EL cables available, though. 3) Do not ask for help finding stuff at M/C. 4) Plan ahead. 5) Intel RST only supports 128k or smaller stripes (at least on this ICH10R board).

    Edit: I just thought of something - I believe I could run a lightweight 64-bit OS in VMware to operate the array, then use that OS to partition the array into 2 partitions that XP would be able to pick up. The VM would have to always be running to access the volume, however. This is an option I had considered before, when I was deciding between unRAID and IRST. The only problem I can see is that the more complicated this gets, the less reliable the array seems, which kind of defeats the purpose.
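    The 2048GB wall, as far as I understand it, is just the MBR math (my arithmetic, nothing array-specific):

```python
# The 2048 GB ceiling is the classic MBR limit: partition sizes and offsets are
# stored as 32-bit sector counts, so with 512-byte sectors the addressable
# maximum works out to 2 TiB, no matter how big the array is.

SECTOR = 512                 # bytes; XP still sees the array as 512-byte sectors
MAX_SECTORS = 2 ** 32        # 32-bit LBA fields in the MBR partition table

limit_bytes = SECTOR * MAX_SECTORS
print(limit_bytes / 2**30, "GiB")    # 2048.0 GiB = 2 TiB
print(limit_bytes / 10**12, "TB")    # ~2.2 TB decimal
```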
  14. I'm building a 4TB RAID 5 (5x 7K1000) on an ICH10R/Core i3 setup. I've read through quite a few topics here, and maybe I'm using the wrong search terms, but I simply cannot find what I'm looking for spelled out in a straightforward manner. I would like a solid recommendation on stripe size and cluster size, and an idea of whether I'll need to manually align the partition with the stripes. I was just reading a similar topic on R5 performance on nForce-based MBs, but I'm not clear whether that information applies to my situation.

    Currently, my build is running 32-bit XP; I'd rather not have to upgrade to W7 unless wholly necessary. I don't want the extra license cost, nor do I want to throw away a fresh, fully tweaked setup of XP. I have a separate boot/system drive outside the array, and another separate drive I use for data buffering during media rips/encodes. My array is strictly for AV media storage, and the vast majority of my space is taken up by 1.5GB-5GB video files that I stream via GbE -> 802.11n -> HTPCs in the house. There are thumbnails for each video file, if that matters at all. Music is not stored in the array. Although the array only needs to output enough to sustain an HD stream or two, I'd still rather have it built as efficiently as possible, which is why I'm here.

    I've read conflicting (or, again, I may be failing to understand) information on whether 32-bit XP can read a 4TB array. I'm not booting from the array; I'm only concerned with XP's ability to read/write the full 4TB of the array. I think that 32-bit Windows/MBR has a cluster-count limit, not an array-size limit, so as long as the cluster size is big enough, 32-bit XP can read it, even if it can't boot from it. I understand the BIOS also has a size limit, but that should be on a per-disk basis, not a per-array basis. What I have read suggests XP will place the start of the partition at sector 63 or something, and it needs to be at 2048? Is that correct, or am I mincing terms anywhere above?

    The disks I'm using have a standard 512b sector size, AFAIK. I think I'd like to use a cluster size of 4KB and a stripe of 16KB, but that is really just a guess on my part. Optimally, I'd like the array tuned for the fastest writes possible, as that is what takes the longest in my process. Rarely do I need to copy anything back off the array, and I suspect that in tuning it for fast writes, the reads will likely still be much faster than the destination disk anyway. Suggestions? Thanks! Edit: TL;DR? Check the bottom of PG 2. 280MB/s write & 490MB/s read speeds!
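    On the "63 vs 2048" question, the arithmetic I've been using to think about it looks like this (the 16KB stripe / 4KB cluster values are the guesses from my post, purely for illustration):

```python
# The "63 vs 2048" bit refers to the starting sector of the first partition:
# XP-era tools start it at LBA 63, newer ones at LBA 2048. With 512-byte
# sectors that gives offsets that either straddle or match typical
# stripe/cluster boundaries.

SECTOR = 512

for start_lba in (63, 2048):
    offset = start_lba * SECTOR
    print(f"start LBA {start_lba}: offset {offset} bytes "
          f"({offset / 1024:.1f} KiB), "
          f"16 KiB stripe aligned: {offset % (16 * 1024) == 0}, "
          f"4 KiB cluster aligned: {offset % (4 * 1024) == 0}")
# LBA 63   -> 32256 bytes (31.5 KiB): not aligned to either boundary.
# LBA 2048 -> 1048576 bytes (1 MiB): aligned to both.
```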