Roxor McOwnage

Member
  • Content Count: 27
  • Community Reputation: 0 Neutral
  • Rank: Member

  1. Roxor McOwnage

    Is 1 GB/s sustained write possible?

    Have you looked at Sun's X4500 "Thumper" system (http://www.sun.com/servers/x64/x4500/index.xml)? They claim 2 GB/s disk-to-memory and 1 GB/s disk-to-network. ZFS over six 8-port SATA2 controllers is seriously fast. Ben Rockwood (http://www.cuddletech.com) has a few blog posts about them, and would probably be happy to answer your questions by email. You can also get one for a free 60-day eval: http://www.sun.com/tryandbuy/rules.jsp. You'd only need to play with it for a few days to see whether it meets your needs or not... Regards, Rox
  2. Roxor McOwnage

    Which 500GB Drives should I get?

    If the images are only 3MB each, they'll sit comfortably in RAM for anything you do to them in Photoshop. Any drives you buy will be the same speed (i.e. drive manufacturer or RAID 0/1/5 will be indistinguishable in speed when editing pics that small). Unless you start editing video, drive performance won't be an issue, so buy the best GB-per-dollar drive you can find.

    What you should be concerned with is backups. Pictures aren't exactly something you can go out and download again if a drive dies (compared to losing a disk full of DivX rips or something). So buy at least 2 disks and set up a backup program to make sure you have a copy of all your wife's pics on that second drive (i.e. make an automatic backup every night; see the sketch below). Many people keep their backup drive in a USB enclosure so they can easily move it between computers. If you buy a larger USB-attached drive (i.e. 750GB) than your wife's picture drive (i.e. 500GB), then that 750 can back up your other important files as well. Win win!

    RAID 0 is, in my opinion, crazy for a job like this. Sure, digital pics are basically free to take, so you take a lot and 99% of them are garbage. But the other 1% are important snapshots in time: memories you can't go back and photograph again. Why you'd want to _double_ your risk of data loss for _zero_ performance gain is beyond me. (Can you tell I've lost important data to RAID 0 and didn't have backups?)
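    As for the automatic nightly backup: here's a minimal sketch, assuming the pictures and the USB drive are both reachable from a machine with rsync and cron (the paths are made up for illustration — any scheduled-copy tool will do the same job):

        # crontab entry: mirror the pictures to the USB drive at 02:30 every night
        # --archive preserves timestamps/permissions, --delete keeps the copy an exact mirror
        30 2 * * *  rsync --archive --delete /data/pictures/ /mnt/usb750/pictures/

    The important part is that it runs automatically, not which tool you pick.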
  3. Roxor McOwnage

    SSD's = Defrag is a thing of the past?

    I think imsabbel is right: there's no benefit to having contiguous blocks of data. Contiguous would really only mean the chunks of data are side-by-side logically in some address space... the bits themselves could be sprayed randomly all over a collection of flash chips and the speed would be the same.
  4. If you have an SSD... with no moving parts... does that mean there's no speed penalty from having files fragmented into many different chunks? I.e. it's the same speed to read 10 chunks of data that are all "beside each other" (does that even make sense in solid-state drives?) as it is to read 10 chunks of data in random locations. Think of all the work that has gone into drive logic to reorder IO to reduce seeks... all the work put into OSes to try to collect reads/writes into contiguous chunks... all the work done by filesystems to try to "do the right thing" to reduce fragmentation... ...is that all wasted effort if SSDs become popular? (Related question: does that mean I could fill an SSD to 100% full and it wouldn't get slower as it fills?) Rox
  5. That's some pretty impressive speed for raw dd transfers! Any chance you could post some bonnie++ results? I know when I first played with dd I got some nice speeds... but bonnie++ showed rather more dismal "real world" numbers. Still, I have 2 Raptors now and I'd love to get an idea of what I could do with 4. Thanks!
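    For anyone who wants to compare apples to apples, the kind of commands I mean look roughly like this (the mount point and sizes are just examples, and this assumes GNU dd and bonnie++ are installed):

        # raw sequential write, flushed to disk before dd reports a speed
        dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync

        # more "real world" numbers: block reads/writes, rewrites, seeks, file creation
        bonnie++ -d /mnt/array -s 8192 -u nobody

    bonnie++ wants the test size (-s, in MB) to be at least twice your RAM so caching doesn't inflate the results.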
  6. Roxor McOwnage

    RAID5 fileserver recommendations

    People have been talking about RAIDZ, which is basically a smarter RAID5 + ZFS (and RAIDZ2 is a smarter RAID6). But looking beyond its RAID5-ish benefits, ZFS by itself plays _very_ well with differently sized disks. If I put a 100GB and a 200GB drive in a ZFS "pool" (ignoring any mirroring/striping) it will intelligently stripe reads/writes across those 2 drives without me having to tell it to stripe anything. ZFS will automagically use every disk/spindle it has control over. And if you add a 3rd disk (say a 320GB) to the pool on the fly, it will instantly start striping over all 3 drives (as much as it can).

    In Linux you either have to use DM to explicitly force a fixed stripe size across equally sized partitions, or use LVM to slam mismatched drives together into one big device to put a filesystem on. In the DM case ZFS is better because it writes variable-sized stripes: write a lot of data and it lays down wide/large stripes; write a tiny bit and it writes small/narrow stripes. Gone are the days of fiddling with a half-dozen RAID stripe sizes and running bonnie++ for hours to find your "ideal" size. As for LVM... when you're simply spanning different disks it's always reading/writing to just one device at a time, while ZFS will read/write to all devices. No comparison there: ZFS is faster and scales with the number of disks you have.

    What does this mean to a home user using vanilla ZFS? Go out and buy another, larger drive, slap it in your computer, and as soon as you run one command to add it to your existing ZFS pool (see the sketch below) your filesystem not only gets larger (no special 'grow' operations, no reformatting), it also gets faster (same load spread over more spindles). Sweet!

    I have 2.4TB across 14 disks in a Gentoo system, split into 2 RAID5 arrays built by DM, using LVM to slam those 2 arrays into one big filesystem. I know what Linux has to offer in filesystems and volume management. ZFS is simply better in almost every possible way. The downsides are that Solaris x86 supports a more limited set of IDE/SATA chipsets than Linux... and Solaris 10 isn't exactly a desktop OS.
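    To give an idea of how little work that is, here's a minimal sketch (the pool name and device names are invented for illustration):

        # a pool dynamically striped across two existing disks
        zpool create tank c0d0 c1d0

        # later: add a third disk; the pool grows immediately and new writes spread across all three
        zpool add tank c2d0

    That's it: no mkfs, no separate resize step, and the filesystems in the pool stay usable the whole time.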
  7. Roxor McOwnage

    Windows Vista 64bit, AMD X2 or C2D?

    Well, AMD and Intel are still using (almost) identical x64 instruction sets, so it's not like a 64-bit OS can use different, faster instructions on AMD than on Intel. phoenix and ehurtley are right that AMD does have better/faster access to memory, which did make it a few percent faster than Intel... it's just that today's Intel dual/quad-core "Core" CPUs are so much faster in raw CPU speed that they more than make up for any loss in memory speed.

    NUMA just lets the OS know that not all memory is the same... that some is faster for a particular CPU than the rest... so the OS can be smart about keeping a process's memory and the processor it's running on together. But this buys you the most speed if you're running apps that constantly work through large chunks of memory... which isn't something most desktop apps do. Like ehurtley says, it's more of a win for supercomputers, grids, and scientific/financial computing than for the average home user browsing the web and playing games.

    I've used only AMD for many, many years now, since it was the best bang-per-buck. But if I had to buy a new system today I wouldn't hesitate to buy a dual/quad-core "Core" CPU from Intel... it's faster for the money, has good headroom for overclocking, and is just an overall better value.
  8. Roxor McOwnage

    Best Data Backup Method?

    Another vote for rsync. It works over a network as easily as to an external drive, and it's smart about moving only the things that have changed between source and destination, so it's very quick. And it's free.

    I actually have 2 computers back up to each other over the network. As an added bonus, if you're backing up to a Linux/Unix filesystem you can keep multi-day snapshots that use very, very little disk space: http://www.mikerubel.org/computers/rsync_snapshots/

    My 2 systems rotate automatically through 2 weeks of online snapshots. That means if I accidentally erase something I can just go into any one of 14 directories (each representing a complete backup from a day in the last 2 weeks) and recover the files I need. And I don't need 14x the storage: each directory looks like a complete copy, but unchanged files are shared between days, so only the day-to-day changes take extra space. In practice I need around 1.5x-2x the space. Pretty slick if you ask me.
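    The nightly job on each box boils down to something like this. It's only a sketch of the hard-link rotation described at the link above; the directory names and the 14-day depth are just my setup, and it assumes an rsync new enough to have --link-dest:

        # drop the oldest snapshot and shift the rest down by one day
        rm -rf /backup/snapshot.13
        for i in 12 11 10 9 8 7 6 5 4 3 2 1 0; do
            [ -d /backup/snapshot.$i ] && mv /backup/snapshot.$i /backup/snapshot.$((i+1))
        done

        # make today's snapshot: files unchanged since yesterday are hard-linked, so they cost no extra space
        rsync -a --delete --link-dest=/backup/snapshot.1 /home/ /backup/snapshot.0/

    Run that from cron once a night and each snapshot.N directory looks like a full copy of /home from N days ago.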
  9. Roxor McOwnage

    RAID5 fileserver recommendations

    If you've had experience with Sun environments, have you considered Solaris 10 x86 instead? For serving up bits over the network Debian isn't going to get you anything extra, and they both cost the same. But instead of agonizing over the best bang-per-buck hardware RAID cards for Linux... you may get better data consistency, flexibility, and performance by just buying cheap PCI/PCIe/PCI-X controllers and feeding the disks to ZFS: http://en.wikipedia.org/wiki/Zfs

    Yes, Linux supports a wider variety of IDE/SATA cards, but the Sol10 HCL gets longer every day, and there are plenty of Sol10/OpenSolaris/Solaris Express forums full of people who can help you make the right hardware choice. I use Debian and Gentoo at home myself, but use Solaris at work, and my next fileserver will use ZFS. Something to think about. Happy New Year!
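    To make that concrete, the whole "hardware RAID replacement" on Solaris 10 is roughly this. It's a sketch only; the pool name, device names, and share path are invented:

        # a RAID5-like pool across four cheap disks, with checksums on every block
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # carve out a filesystem for the fileserver; it mounts itself at /tank/export
        zfs create tank/export

    Compare that with picking a RAID card, flashing firmware, and hoping its write cache and driver behave.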
  10. Hi Adde, You're right: maybe the idea was good but I just didn't go far enough with it. 4 drives were convenient because of port and internal space issues, but 2 more drives may do the trick if I get a bigger case. So far I'm just complaining about the "feel" of the system; if I do get a couple more drives I'll be sure to post some real benchmarks of my findings. Regards, Rox
  11. For big, cheap, and reliable, software RAID 5 seems to be the way to go, and it's what I use for a home fileserver. No real need for speed there. But at work my desktop typically runs a few VMware images with Windows/Linux/Solaris x86 inside. I had a setup with 2 74GB Raptors, but for space and failure-recovery reasons swapped them out for 4x160GB RAID 10. After all, 4 drives must be better than 2, right? Although I got the extra space and a warm-n-fuzzy feeling from the mirrored disks... the striping and higher STR didn't help at all. In fact, things are noticeably more sluggish now. I can only guess that since VMware is really cramming an entire filesystem inside a large file on the underlying OS, there must be some crazy seeking going on inside those big files? Or, where before I put a couple of VMs on each disk, now they're all competing for the same array? Anyway, I learned my lesson. The money would have been better spent on more (or larger) Raptors. And I thought I was being so clever and frugal... Bah! Live and learn... Rox
  12. Roxor McOwnage

    How big is your array?

    Logical Volume Management (LVM: examples here) lets you take individual disks, or partitions, or RAID arrays and make pools ("volume groups") out of them... which you can then carve filesystems out of. In my case I just used it to join the 8x200 and 5x320 disks into a single pool, then carved one large filesystem out of it. So when I write to disk I don't actually know which drive is getting the data, nor do I care. There are lots of reasons people use volume management, but the best feature for me is that I can "grow" my filesystems on the fly: put more disks in the box, use the LVM tools to add them to my current pool, then tell the filesystem to resize itself... while people are still using it. Regards, Rox
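    P.S. For the curious, the whole dance looks roughly like this. It's only a sketch: the device names, volume group name, and the ReiserFS choice are stand-ins for whatever you actually have:

        # make the two RAID arrays available to LVM and pool them into one volume group
        pvcreate /dev/md0 /dev/md1
        vgcreate bigvg /dev/md0 /dev/md1

        # carve one big logical volume out of the pool and put a filesystem on it
        lvcreate -l 100%FREE -n data bigvg
        mkfs.reiserfs /dev/bigvg/data

        # later: a new array goes in, and the volume and filesystem grow while still mounted
        vgextend bigvg /dev/md2
        lvextend -l +100%FREE /dev/bigvg/data
        resize_reiserfs /dev/bigvg/data

    The resize step is the only part that differs per filesystem (ext3 would use resize2fs, for example).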
  13. Roxor McOwnage

    How big is your array?

    8x200GB RAID 5 + 5x320GB RAID 5, LVM'd together, and dumped on the network with Samba... though I've effectively lost 100GB of the total to some filesystem, kernel, or fragmentation problem. Rox
  14. Roxor McOwnage

    Linux software RAID5 over 2 controllers

    I have software RAID 5 spread over 8 ports on a 3Ware controller, 2 Promise cards, and one onboard motherboard IDE port... all with the 2.6-series kernels. They're merged into 2 RAID 5 arrays, then LVM stitches both arrays together to make one big ReiserFS filesystem. No corruption issues so far, even after surviving many power failures etc. I did have problems with a couple of drives occasionally dropping out of the array... but they tested OK, so I'd just merge them back in. It turned out to be flaky power Y-cables; after replacing them it's been fine for months. So, at least with the 2.6 series... I'm using at least 3 different chipsets spread over 3 cards and my motherboard, and it works fine. You shouldn't have any problems. Regards, Rox
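    In case it helps: there's nothing controller-aware in the setup, the arrays are plain Linux software RAID (md), and creating one with mdadm across disks hanging off different cards is the same command as with a single controller (device names here are invented):

        # four disks on the add-in cards plus one on the onboard IDE port -- md doesn't care where they live
        mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/hda1

    Spreading the members over several controllers can actually help, since a single saturated card or bus isn't a bottleneck for the whole array.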
  15. Roxor McOwnage

    combining 2 broadband connections

    Just a note: no dual-WAN router, hardware or software, and no magic you perform with Linux will by itself do what Ron_Jeremy is asking... which is to combine the upload capacity of both cable modems (i.e. make a single upload go faster than one modem allows). At best they make it easy to manage and load-balance the two connections and to handle a failure of either one automatically. You need support on the other end, which as xSTLx points out is best done by your ISP. You see that sort of thing most often with ISDN and dial-up lines from some providers. Google for "multilink PPP" (MLPPP). Regards, Rox