Bit

Member
  • Content Count

    27
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Bit

  • Rank
    Member
  1. Bit

    Intel X25-M Impressions

    Oops, now that I've come home and read my post again, it sounds more snarky than funny... not what I was aiming for. My apologies: long day at work... Bit
  2. Bit

    Intel X25-M Impressions

    So, you installed an SSD in your system. A system that you don't regularly run any 'real' disk-intensive applications on. And a system that you specifically configured (8GB RAM, no swapfile) to avoid using the disk subsystem. And the result was the SSD didn't seem like an improvement. Was the SSD supposed to overclock your CPU 30%? Double your memory bandwidth? SLI your videocard? On a PC tailored not to use the disk... you bought a faster disk. Right. I'm off to buy a new inkjet printer: I hear it will double my download speed!...
  3. Yikes! If you can afford a 750, then you have enough money to "pin your hopes" on having all your files backed up on a separate drive, no? That way you're not rolling the dice on whether your bits will still be there tomorrow. I don't know the exact prices, but for the price of a 750 could you maybe buy a 250, a 400, and a USB2/eSATA enclosure? Use the 250 for storage, and put the 400 in the enclosure. Any backup program can automatically keep 1-2 weeks of differential backups on that 400, so if you ever lose the 250 (or more commonly: erase a file you wish you hadn't) you can get things back from any point in the last 7-14 days; a rough sketch of that kind of rotation appears after this post list. If you have 2 computers you could also have them back each other up over the network: that's what I do. This probably isn't what you wanted to hear, but disk space is dirt cheap: disks are worth nothing. Knowing your data is safely protected on a separate disk, run by an automatic program that you don't have to think about, is worth a lot. This is assuming your data is important, and not disposable stuff you could easily get again with a P2P app: in that case buy the 750. Regards, Bit (Oh, and this setup works nicely over time as well: when you need more space you buy a large disk... make it your backup disk... and your old backup disk becomes your new storage space. Rinse and repeat as needed...)
  4. I read a lot here, but post very rarely. I actually find the reviews aren't the interesting part, since you're measuring isolated metrics. What I like is the people in the forums who say "I have a box with <X> drives in configuration <Y>, on OS <Z>, and I find app <A> can stream <B> MB/s over my GigE network". In those scenarios the actual drives you're using may not be terribly important: it's all the other layers you add on for reliability/performance/manageability etc... If 2 people spend about the same amount on "storage infrastructure" but one guy gets 20MB/s on the other end of his network and the other gets 50MB/s... that's info that's very useful in the real world. Reviews of poor man's "enterprise" storage options/configurations (or even just more on cheap NAS devices) would be harder though. Many more variables to control = more things that can go wrong = more people complaining on the forums when they don't get the same results you do. I'm not sure why I posted this... it's a bit off topic... I should get back to work. Regards, Bit
  5. Bit

    High performance desktop storage

    The other posters are right on. When you start talking "enterprise" then you're getting into SAN territory... where you essentially have refrigerator-sized arrays of drives talking over FC, fed from multiple power supplies... and with a big, fat, battery-backed RAM cache for all IO. For home use it's usually best to have multiple non-RAID drives. The next step up could be something like 8 SATA drives hanging off a PCIe controller, RAID-10. Bit
  6. Bit

    Enterprise OS management

    Hi Frank, Look at Sun's N1 SPS (Service Provisioning System): http://www.sun.com/software/products/service_provisioning/ It's free to use, and you can buy support and training if you decide you need it. I've worked with several customers that use it to deploy full 3-tier architectures, including configuring clusters etc. on-the-fly. Mind you, even though it uses XML fairly heavily, you'll also sometimes want to use it to kick off scripts. You cannot escape some scripting if you're using Unix. Another thing to look at that's a bit lighter weight and more for patching and simple apps is Sun Connection (http://www.sun.com/service/sunconnection/index.jsp). Yes, I work for a Sun partner, so I'm a bit biased. But these tools support your OS, are free, and can support the volume of systems you're maintaining. Regards, Bit
  7. Bit

    About 2TB limit

    This is a bit off-topic: is the 2TB restriction just for filesystems Windows controls itself? My WinXP Pro system uses a network share that's larger than 2TB with no problems. Regards, Bit
  8. You won't notice/feel a difference between SATA 1 (150MB/s) and SATA 2 (300MB/s), unless your primary use for the computer is to run benchmarks. Even SATA 1 has a transfer rate around double what the fastest hard drives on the market can sustain. And I don't think eSATA is worth buying a controller for, given the convenience, cheap price, and wide availability of USB2 external enclosures. So, I'd say spend your money elsewhere (larger drives, drive redundancy etc). Or if your motherboard only has USB1, perhaps a USB2 controller for the external drive etc... This is all just my personal opinion, I'd love to hear what other people think. Regards, Bit
  9. Bit

    1 large array or two small ones

    I vote for buying those 2 more 160's, but then making the Raptors RAID 1 and the 4 x 160's RAID 5. You'd keep the snappiness of the OS on the Raptors, gain about 120GB of space, and gain fault tolerance from both arrays. You'd still need your backups, and you said fault tolerance isn't your main concern: but this way makes the difference between a blown-drive-but-still-usable system and spending hours/days recovering your system from a backup, potentially losing work since the last backup, being forced to track down replacement drives immediately, etc... Ok, ok, not answering your question... it's just that drives are so cheap these days it seems silly not to get yourself some fault tolerance if you're upgrading anyways. Bit
  10. I'd lean towards the case full of drives + software RAID + Linux approach. Just buy extra PCI SATA cards if your motherboard needs more ports. You'll get the most flexibility in RAID levels and partitioning/LVM options, and since you're not buying hardware RAID cards, replacement parts are cheap. Not to mention you have more recovery options if things fail. With software RAID you can reassemble your disks on any Linux system you can plug them into (see the reassembly sketch after this post list). But if you bought an Areca hardware card and it fails, and the local PC store only has 3ware cards in stock, you're not getting your data back any time soon. Regards, Bit
  11. Bit

    New HDD, exclusively for backups.

    A single external drive in an enclosure seems to be exactly what you need. Just go to your local computer store, buy the parts, and be done with it! Regards, Bit
  12. Quoting the previous poster: "Yeah, just as cheap as software raid on windows... I wouldn't want a software raid 5 because of the bad performance and because I wouldn't feel safe at all knowing that the array might blow up during a simple OS failure. And yes, those do happen to linux as well. Anyhow, I'm leaning towards getting a cheap fileserver for all my backup needs and ditch the raid on my workstation." This is a bit off-topic as Linux seems to be out of the question anyways, but my experience with Linux software RAID is that it is: a] faster than hardware RAID in all but the most expensive high-end scenarios; b] more easily managed remotely, especially compared with many cheaper semi-hardware-RAID cards that require you to boot into their BIOS for some operations (which is a pain - there should be no need to reboot); c] more recoverable: a hardware RAID controller failure, or moving drives to another system, usually requires exactly the same model hardware RAID card, or at least a compatible card with the same chipset or manufacturer, while recovering or moving a software RAID array only requires that the system can see the array as a bunch of regular old hard drives; d] more configurable: software RAID lets you combine any block device attached to the computer, not just drives on a specific controller. A great example is something like http://www.drbd.org/, which we use at work to mirror drives real-time between physically separate computers over GigE. Try that with your 3ware/LSI/Adaptec hardware RAID cards. Anyways, I'll shut up now
  13. Find the cheapest $/GB SATA drive you can, any model, any manufacturer. Buy 3. Software RAID-5 them. Be happy! If you'll be "running the drive day in and day out" then go for the cheapest solution that wastes the least % of storage while still tolerating a drive failure. Plus, as you add drives, the % of storage lost to parity goes down with RAID-5 as opposed to mirroring (RAID-1); there's a quick capacity calculation after this post list. Bit
  14. Bit

    Linux Software RAID tests

    Nice numbers: I wish more people would post their results. This is a PDF of an old spreadsheet I made when I bought 8 new 200GB WD drives. I tested them on a 3Ware 8-port IDE RAID controller, all in RAID 5 (I wasn't interested in RAID 0 or 1). I tested hardware RAID and software RAID (using the card as just 8 individual ports), and using ReiserFS and EXT3 (with and without stride settings when making the filesystem; the stride arithmetic is sketched after this post list). Software RAID was quite a bit faster. Yes, it used more CPU... but really, CPU is effectively "free" these days and I'd gladly give up some cycles for faster IO. Your RAID 5 write numbers look low, even comparing my numbers with 8 drives against yours with 5. Are you sure the RAID rebuild was complete at the time? http://battlemage.dyndns.org:88/Hardware_v...ftware_RAID.pdf ...or... http://battlemage2.dyndns.org:88/Hardware_...ftware_RAID.pdf Regards, Bit
  15. Bit

    Building a 1TB RAID5 File Server

    Nice setup! Jchung's suggestion of the 5-into-3 cage is a good idea if you want to add more space in the future; I have one in this pic. My only other suggestion is that you consider replacing the rounded cables with regular cables: they can lie flat against each other and take up little space. The bottom cables come from an 8-port 3Ware 7500 going into 5 sideways drives on the bottom of the case and the 3 above it (8x200GB). The second diagonal bundle goes from 2 Promise IDE cards + one onboard IDE port to a 5-into-3 cage holding 5x120GB. This is an old pic: at the time the power cables were much more of a mess than the IDE cables! With the boot drive, this case has 14 drives, 1 drive per port (no master/slave), and cabling is pretty straightforward: with rounded cables it'd look like spaghetti! Regards, Bit
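
The backup rotation mentioned in post 3 can be sketched in a few lines. This is a minimal illustration only: the paths, the 14-day retention window, and the plain directory copy are assumptions, and a real setup would use a proper backup program (differential backups, verification) rather than a straight copy.

```python
# Minimal sketch of "keep 1-2 weeks of backups on the external drive and prune old ones".
# SOURCE and BACKUP_ROOT are hypothetical paths; a real tool would do differentials.
import shutil
from datetime import datetime, timedelta
from pathlib import Path

SOURCE = Path("/data")                       # assumption: the data to protect
BACKUP_ROOT = Path("/mnt/external/backups")  # assumption: the external drive
KEEP_DAYS = 14                               # roughly the 7-14 day window from the post

def run_backup() -> None:
    # One dated snapshot per day.
    today = datetime.now().strftime("%Y-%m-%d")
    dest = BACKUP_ROOT / today
    if not dest.exists():
        shutil.copytree(SOURCE, dest)

    # Prune snapshots older than the retention window.
    cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
    for snap in BACKUP_ROOT.iterdir():
        try:
            stamp = datetime.strptime(snap.name, "%Y-%m-%d")
        except ValueError:
            continue  # ignore anything that isn't a dated snapshot
        if stamp < cutoff:
            shutil.rmtree(snap)

if __name__ == "__main__":
    run_backup()
```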
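
The "reassemble your disks on any Linux system" point from posts 10 and 12 comes down to one mdadm invocation. The sketch below simply wraps that command; it assumes mdadm is installed and the moved disks are visible to the kernel, and exact behaviour can vary by distribution and mdadm version.

```python
# Sketch: reassembling a Linux software RAID set after moving the disks to
# another machine. Shells out to the standard mdadm tool.
import subprocess

def assemble_arrays() -> None:
    # Scan all block devices for md superblocks and assemble whatever is found.
    subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

    # Show what was assembled.
    with open("/proc/mdstat") as f:
        print(f.read())

if __name__ == "__main__":
    assemble_arrays()
```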
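
Post 13's point about parity overhead is easy to check with a quick calculation: RAID-5 always gives up one drive's worth of capacity, so the usable fraction grows with the drive count, while mirrored pairs stay fixed at 50%. The drive counts below are just examples.

```python
# Usable capacity fraction: RAID-5 vs mirroring (RAID-1 pairs).
def usable_fraction_raid5(n_drives: int) -> float:
    # RAID-5 spends one drive's worth of space on parity, regardless of array size.
    return (n_drives - 1) / n_drives

def usable_fraction_raid1(n_drives: int) -> float:
    # Mirroring keeps two copies of everything, so half the raw space is usable.
    return 0.5

for n in (3, 4, 6, 8):
    print(f"{n} drives: RAID-5 {usable_fraction_raid5(n):.0%} usable, "
          f"RAID-1 {usable_fraction_raid1(n):.0%} usable")
```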
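
Post 14 mentions building EXT3 with and without stride settings. The stride value is just the RAID chunk size divided by the filesystem block size; the chunk size, block size, and drive count below are illustrative assumptions, not the values used for the linked PDF.

```python
# Stride arithmetic for an ext3 filesystem on top of a software RAID-5 set.
# The numbers are assumptions for illustration (64 KiB md chunks, 4 KiB ext3
# blocks, 8 drives); check your own array before using them.
CHUNK_KIB = 64            # md chunk size
BLOCK_KIB = 4             # ext3 block size
DRIVES = 8                # drives in the RAID-5 set
DATA_DRIVES = DRIVES - 1  # one drive's worth of space goes to parity

stride = CHUNK_KIB // BLOCK_KIB       # filesystem blocks per RAID chunk
stripe_blocks = stride * DATA_DRIVES  # blocks in one full data stripe

print(f"stride = {stride} blocks (pass via mke2fs -E stride=...)")
print(f"full data stripe = {stripe_blocks} blocks")
```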