Everything posted by geshel

  1. Retail pricing listed here:
  2. You've got things a bit flipped-around: these tests were performed by reading and writing directly at a block-level to the drive. No filesystem was involved at all. There are no files, just un-named and un-indexed blocks of data. Filesystems like NTFS sit "above" the block-level; they essentially manage those blocks and present a file-based interface.
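To sketch what "reading and writing directly at a block-level" means: the test tool just reads and writes byte offsets on the device, with no filesystem in the path. A runnable stand-in (a temp file plays the part of a raw device like /dev/sdb, which would need root — the device name and block number are illustrative):

```python
import os
import tempfile

BLOCK = 512  # classic sector size

# Stand-in for a raw block device such as /dev/sdb (hypothetical name);
# a temp file keeps the sketch runnable without root.
fd, path = tempfile.mkstemp()

# Write a pattern directly to "block 10" -- no filesystem, no file names,
# just data at an offset on the device.
os.pwrite(fd, b"\xaa" * BLOCK, 10 * BLOCK)
data = os.pread(fd, BLOCK, 10 * BLOCK)

os.close(fd)
os.remove(path)
```

A filesystem like NTFS would sit above this layer, deciding which of those raw blocks belong to which named file.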
  3. Possibly but I wouldn't count on it. The Adaptec RAID will likely start the resulting volume at some sector that's not the first - using some sectors at the beginning for metadata (info on what RAID set it's part of etc). So the boot information and everything else will be at the wrong offset, and likely just not found or considered corrupt.
  4. Oh, it's just hard to find / non-obvious. Not a lot of details there either, though they do have a price configurator online.
  5. I think if you align the stripe size to the erase-block size of the SSDs, that (the RAID10) does sound good. Normally for spinning disks, I'd blather on a bit about how a 4-disk stripe can actually be slower than a 2-disk mirror for some workloads (random reads with an IO size greater than the stripe size but less than the full stripe-width), but for SSDs I think it's a moot point.
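To make the alignment point in #5 concrete, here's a toy check. The 1 MiB erase-block and 256 KiB stripe sizes are assumptions for illustration; the real erase-block size comes from the SSD datasheet:

```python
# All sizes here are illustrative assumptions, not vendor numbers.
erase_block = 1024 * 1024        # SSD erase-block size (check the datasheet)
stripe_size = 256 * 1024         # per-disk stripe (chunk) size on the RAID
data_disks  = 2                  # 4-disk RAID10 = 2 mirrored pairs, striped

# If the erase block is a whole multiple of the stripe size, chunk
# boundaries never split an erase block across two chunks.
aligned = erase_block % stripe_size == 0
full_stripe_width = stripe_size * data_disks   # data per stripe row
```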
  6. geshel

    Help with LSI MegaRAID settings

    I think that's good. At first I was going to say: if you're turning off the write cache (because there's no BBU), then turn off the disk cache as well. However, I don't think you have too much to worry about in the case of power loss if this is a movie storage device - you won't be writing to it that much? And if you are writing a movie when the power goes off, you'd just have to record that movie again anyway.
  7. geshel

    Disk choice for RAID50

    Can't say too much about the drives, we are looking at something that uses the Hitachi but don't have any time invested in either yet. However. . .a 6-disk RAID50? If you're not going to be IO-intensive, I'd say a RAID6 is the better call. Also, if by "expand later" you mean expanding the hw RAID group, then IME this takes an extremely long time. Like, weeks, when you're talking about several TB of data.
  8. Hmm, not seeing anything about it on the Dell website.
  9. The rails they shipped with the DL180 G5? I think they are actually just little shelves. They install easily, but the server will fall on the floor if you unwittingly pull it out a bit too far. . .and this is an 80-pound server. . .
  10. I wish HP's online configurator was better. Honestly it's a complete mess. They list a dozen different models, some are "configurable" and some aren't. The page that does list the configurable models doesn't give any indication what number of drive trays are included. I just want to know if this server is available with 25 SFF drives - there's one line on one page that makes me think it is, but I can't seem to find a configuration that actually supports that.
  11. Wait, they're using RAM drives for the ZIL? Isn't that a bad idea? Maybe I don't understand ZFS well enough. . .
  12. geshel

    2TB 7200rpm vs 2x1TB 5400rpm RAID 0?

    It depends on the workload and the stripe size. For random IO, there are cases where the single drive would be faster than two slower drives in RAID0. I'm a mostly random-IO kind of guy :-) but my guess is that for sequential IO (e.g. you're never editing more than one video at a time, and your editing setup doesn't involve writing results to disk at the same time as it's reading), the two drives will be faster, *assuming* apples-to-apples like orion24 mentioned, and with all the other considerations orion24 raised in play. My opinion: don't try to use the laptop drive for this workload. Get a good fast 3.5" drive and if you need more speed later, add more (you're thinking of using RAID0, so to me this means you don't care about the data on the disks - e.g. this is temp space and your real storage is elsewhere - so it should be no problem to change that single drive into a RAID0 later and re-populate it).
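A toy model of the random-IO case above: when the IO size exceeds the stripe size, one RAID0 read fans out across disks and every disk pays a seek, which is how two slower drives can lose to one fast one. The function name and numbers are mine, for illustration only:

```python
import math

def disks_touched(io_size, stripe_size, ndisks):
    """How many RAID0 members one stripe-aligned random read lands on."""
    return min(math.ceil(io_size / stripe_size), ndisks)

# 64 KiB random read on a 2-disk RAID0 with 32 KiB stripes: both slow
# drives seek, so the request is gated by two 5400rpm seeks at once.
both = disks_touched(64 * 1024, 32 * 1024, 2)

# A 16 KiB read fits inside one stripe: only one drive has to seek.
one = disks_touched(16 * 1024, 32 * 1024, 2)
```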
  13. So, using raidz2 across nodes via ATA-over-Ethernet is kind of cool. It does let them use a RAIN setup rather than RAID, but I'm not sure what the advantage of that is. The ZX head unit is still 1. a single point of failure and 2. a bottleneck, since all data to the clients must pass through it. I guess if you need a huge volume of storage but aren't worried about HA or throughput, then this would work. . .
  14. Hardware. Adaptec 6805. In a Supermicro chassis using their SAS2-216EL1 backplane. The drives negotiate at 3Gbps instead of 6Gbps for some reason, which has always irritated me but perhaps they work better that way? I'm probably moving to Toshiba 10K SAS drives now though since they are performing better in the same setup.
  15. I should mention they are operating in 20-disk RAID5 groups. . .
  16. Still no problems with mine, over 200 of them working pretty hard for 6+ months now.
  17. geshel

    Raid 10, auto restore data hard drive?

    What are you using to implement the RAID? I would guess that you need to go to the control panel interface for whatever RAID card you're using (it must be hardware, because you installed Windows after creating the array. . .?), and see what it says. Most will start rebuilding automatically though. Also, RAID10 isn't like you described. It is a stripe of mirrors, not a mirror of stripes. So A and B would be mirrored, and C and D mirrored, then data striped across the two sets. If it's really RAID10.
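The "stripe of mirrors" layout can be sketched as a mapping from logical block to mirror pair. Names and the chunk size are illustrative, not any controller's actual on-disk format:

```python
def raid10_map(lba, chunk, pairs):
    """Map a logical block to (mirror_pair, offset_within_pair)
    for RAID10 as a stripe of mirrors."""
    chunk_no = lba // chunk
    pair = chunk_no % pairs                              # chunks striped across pairs
    offset = (chunk_no // pairs) * chunk + (lba % chunk)
    return pair, offset

# 4 drives: pair 0 = (A, B) mirrored, pair 1 = (C, D) mirrored,
# with 8-block chunks striped across the two pairs.
first = raid10_map(0, 8, 2)    # first chunk lands on pair 0 (A+B)
second = raid10_map(8, 8, 2)   # next chunk lands on pair 1 (C+D)
third = raid10_map(16, 8, 2)   # wraps back to pair 0, next offset
```

Every block written to a pair goes to both drives in it, which is why losing one drive from each pair is survivable but losing both drives of one pair is not.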
  18. Probably about as many zeros as were written to the drive! ;-)
  19. I'm definitely keeping these on my radar.
  20. geshel

    Next Generation Harddrives

    More per-disk cache and faster disk interfaces can give, from what I've seen, an incremental increase in performance in some real-world use cases. But when talking about RAID performance, the hierarchy really goes like this: 1. number of disks x spindle speed 2. RAID configuration 3. stripe size and IO size (a complicated, non-linear relationship) 4. RAID controller shortcomings (I've never found a RAID controller that made a disk array perform faster than I expected, only worse) 5. OS / application concerns 6. maybe disk cache and per-disk interface speed (at least regarding 3G vs. 6G SATA)
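Item 1 of that hierarchy can be turned into a back-of-the-envelope number. The per-spindle IOPS figures below are rough rules of thumb I'm assuming, not measurements:

```python
def raw_random_iops(ndisks, rpm):
    """Crude random-IOPS ceiling: item 1 of the hierarchy (disks x speed).
    Per-spindle figures are rule-of-thumb assumptions."""
    per_disk = {7200: 75, 10000: 125, 15000: 175}[rpm]
    return ndisks * per_disk

eight_7200 = raw_random_iops(8, 7200)   # 8 spindles at 7200rpm
four_15k = raw_random_iops(4, 15000)    # fewer but faster spindles
```

Everything further down the hierarchy (RAID level, stripe size, controller, OS) can only carve away from that ceiling, which is why cache and interface speed sit at the bottom.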
  21. Are these prices as aggressive as I think they are? I haven't done a google check yet, but seems like $500 for a 480GB 2.5" SSD is pretty darn good, and a 1TB for $1,000 is awesome. This is getting *really* close to the point in the curve where I would switch from HDD to SSD for the bulk of my uses. . .