About geshel

  1. Retail pricing listed here:
  2. You've got things a bit flipped around: these tests were performed by reading and writing directly at the block level on the drive. No filesystem was involved at all - there are no files, just un-named and un-indexed blocks of data. Filesystems like NTFS sit "above" the block level; they essentially manage those blocks and present a file-based interface.
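To make the distinction concrete, here's a minimal sketch of what "block-level" access means - all names and sizes are illustrative, and an ordinary temp file stands in for a raw device (against a real device like /dev/sdb the same calls would need root):

```python
import os
import tempfile

BLOCK_SIZE = 4096  # assumed block size; real devices report their own

# An ordinary temp file stands in for a raw device such as /dev/sdb.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, BLOCK_SIZE * 16)  # a tiny 16-"block" device

# Write block 3 directly: no filename lookup, no directory entry -- just an offset.
os.pwrite(fd, b"A" * BLOCK_SIZE, 3 * BLOCK_SIZE)

# Read it back by offset; nothing here knows about files, folders, or NTFS.
data = os.pread(fd, BLOCK_SIZE, 3 * BLOCK_SIZE)
os.close(fd)
os.remove(path)

print(data == b"A" * BLOCK_SIZE)  # True
```

A filesystem like NTFS would be the layer that decides *which* block numbers belong to which file; the benchmark bypassed that layer entirely.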
  3. Possibly, but I wouldn't count on it. The Adaptec RAID will likely start the resulting volume at some sector other than the first, using some sectors at the beginning for metadata (info on which RAID set it's part of, etc.). So the boot information and everything else will be at the wrong offset, and will likely just not be found or be considered corrupt.
  4. Oh, it's just hard to find / non-obvious. Not a lot of details there either, though they do have a price configurator online.
  5. I think if you align the stripe size to the erase-block size of the SSDs, that (the RAID10) does sound good. Normally for spinning disks, I'd blather on a bit about how a 4-disk stripe can actually be slower than a 2-disk mirror for some workloads (random reads with an IO size greater than the stripe size but less than the full stripe-width), but for SSDs I think it's a moot point.
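As a rough sketch of the alignment point (the sizes here are illustrative, not vendor specs - check your SSD's datasheet for its real erase-block size): a stripe size that's a whole multiple of the erase-block size keeps stripe boundaries from straddling erase blocks:

```python
# Illustrative numbers only -- real SSDs report their own erase-block size.
ERASE_BLOCK = 512 * 1024  # assume a 512 KiB erase block


def aligned(stripe_size: int, erase_block: int = ERASE_BLOCK) -> bool:
    """An aligned stripe never splits an erase block across two stripes,
    so the SSD can erase and rewrite whole blocks per stripe write."""
    return stripe_size % erase_block == 0


print(aligned(512 * 1024))   # True: 512 KiB stripe matches exactly
print(aligned(1024 * 1024))  # True: 1 MiB stripe covers two erase blocks cleanly
print(aligned(384 * 1024))   # False: 384 KiB straddles erase-block boundaries
```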
  6. geshel

    Help with LSI MegaRAID settings

    I think that's good. At first I was going to say: if you're turning off the write cache (because there's no BBU), then turn off the disk cache as well. However, I don't think you have much to worry about in the case of power loss if this is a movie-storage device - you won't be writing to it that much. And if you are writing a movie when the power goes out, you'd just have to record that movie again anyway.
  7. geshel

    Disk choice for RAID50

    Can't say too much about the drives; we are looking at something that uses the Hitachi but don't have any time invested in either yet. However. . .a 6-disk RAID50? If you're not going to be IO-intensive, I'd say RAID6 is the better call. Also, if by "expand later" you mean expanding the hardware RAID group, then IME this takes an extremely long time. Like, weeks, when you're talking about several TB of data.
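For a concrete comparison on six disks (a sketch with made-up disk sizes; both figures ignore hot spares): RAID6 gives the same usable capacity as a RAID50 built from two 3-disk RAID5 legs, but it survives *any* two disk failures, while the RAID50 only survives two failures if they land in different legs:

```python
def raid6_usable(n_disks: int, disk_tb: float) -> float:
    # RAID6: two disks' worth of capacity go to parity;
    # any two simultaneous failures are survivable.
    return (n_disks - 2) * disk_tb


def raid50_usable(n_disks: int, disk_tb: float, legs: int = 2) -> float:
    # RAID50: one parity disk per RAID5 leg; a second failure
    # in the same leg loses the whole array.
    return (n_disks - legs) * disk_tb


print(raid6_usable(6, 2.0))   # 8.0 TB, tolerates any two failures
print(raid50_usable(6, 2.0))  # 8.0 TB, tolerates two failures only across legs
```

Same capacity either way at six disks, so the RAID6's stricter fault tolerance is essentially free unless you need the RAID50's write performance.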
  8. Hmm, not seeing anything about it on the Dell website.
  9. The rails they shipped with the DL180 G5? I think they're actually just little shelves. They install easily, but the server will fall on the floor if you unwittingly pull it out a bit too far. . .and this is an 80-pound server. . .
  10. I wish HP's online configurator was better. Honestly it's a complete mess. They list a dozen different models, some are "configurable" and some aren't. The page that does list the configurable models doesn't give any indication what number of drive trays are included. I just want to know if this server is available with 25 SFF drives - there's one line on one page that makes me think it is, but I can't seem to find a configuration that actually supports that.
  11. Wait, they're using RAM drives for the ZIL? Isn't that a bad idea? Maybe I don't understand ZFS well enough. . .
  12. geshel

    2TB 7200rpm vs 2x1TB 5400rpm RAID 0?

    It depends on the workload and the stripe size. For random IO, there are cases where the single drive would be faster than two slower drives in RAID0. I'm a mostly random-IO kind of guy :-) but my guess is that for sequential IO (e.g. you're never editing more than one video at a time, and your editing setup doesn't involve writing results to disk at the same time as it's reading), the two drives will be faster, *assuming* apples-to-apples like orion24 mentioned. But all the other considerations orion24 raised also come into play. My opinion: don't try to use the laptop drive for this workload. Get a good, fast 3.5" drive, and if you need more speed later, add more. (You're thinking of using RAID0, so to me this means you don't care about the data on the disks - e.g. this is temp space and your real storage is elsewhere - so it should be no problem to change that single drive into a RAID0 later and re-populate it.)
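The stripe-size caveat above can be made concrete with a toy model (sizes are made up for illustration): a random read larger than the stripe size but smaller than the full stripe width forces every member disk of a 2-disk RAID0 to seek, so one IO costs two seeks, where a single disk would pay only one:

```python
def disks_touched(offset: int, io_size: int, stripe: int, n_disks: int) -> int:
    """How many RAID0 member disks a single IO spans (simple model,
    ignoring controller-level coalescing)."""
    first_stripe = offset // stripe
    last_stripe = (offset + io_size - 1) // stripe
    return min(last_stripe - first_stripe + 1, n_disks)


# 64 KiB stripe, 2-disk RAID0:
print(disks_touched(0, 96 * 1024, 64 * 1024, 2))  # 2: both disks seek for one IO
print(disks_touched(0, 32 * 1024, 64 * 1024, 2))  # 1: small IO stays on one disk
```

For sequential streams the extra disks all contribute bandwidth, which is why the RAID0 wins there; for random IO in that awkward size range, each request just burns seeks on multiple spindles.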
  13. So, using raidz2 across nodes via ATA-over-Ethernet is kind of cool. It does let them use a RAIN setup rather than RAID, but I'm not sure what the advantage of that is. The ZX head unit is still 1. a single point of failure, and 2. a bottleneck, as all data to the clients must pass through it. I guess if you need a huge volume of storage but aren't worried about HA or throughput, then this would work . . .