jonnyz2

Member

  • Content Count: 19
  • Community Reputation: 1 Neutral
  • Rank: Member
  1. So, here are the drive choices that seem to make the most sense:
     • Toshiba PH3300U-1I72, 3TB: best reviews, best reported read/write performance (180MB/s), $118 delivered, 3-year warranty
     • WD Red WD30EFRX, 3TB NAS drive: so-so reviews, 145MB/s, $147 delivered, I believe with a 3-year warranty
     • Western Digital Enterprise RE4-GP, 2TB: so-so reviews, 150MB/s, $120 delivered, 5-year warranty. Cost-wise, I would need to buy twice as many to meet my storage needs in the near term.
     Thoughts/advice? (A rough cost-per-TB comparison is sketched at the end of this list.)
  2. dietrc70, thanks! I didn't know you could have a hot spare on RAID 10. Is the special enclosure required for that? If not, the case I already have has space for 6 drives, so I could configure it in an existing case without a new purchase. The one thing I would not get is an audible alarm for the fan; however, there are software tools to measure case and HDD temperatures that should suffice, as long as I'm at the workstation and NOT using it as a remote NAS (see the SMART temperature sketch at the end of this list). I had been under the impression that Intel's RAID controller and software were not OS specific. Is that incorrect? If you could tell me more about what is OS specific, I would appreciate it. By the way, CrashPlan is the vendor I'm considering for backups, as they offer a seed drive service. Are you happy with them? Thanks!
  3. Thanks for the TLER tip. I had read about the drive setting but not that WD Black could have a problem. I agree with your point about time having real value, and I don't mind changing plans. Let me share more of the background so that everyone can better understand my needs.

     I built a NAS before they were readily available for SOHO use. This worked fine for everything except photo processing. The combined throughput of the NICs, network, drives, etc. was fairly poor, and I discovered somewhat late that it was dramatically reducing working speeds with larger photo files when I was processing on my workstation and saving to the NAS. I asked for advice on upgrading my machine, and several photo experts suggested first moving the image storage, at least for working files, back to the workstation itself before looking at CPU/mobo upgrades. They also suggested an SSD for the OS, page file, etc. They were right! I made both changes some time ago, and they provided a huge improvement for opening files, working on files (due to paging), and saving files. To your point about RAM and a RAM disk, I will shortly be setting up a new workstation with 32GB of RAM, which should provide more than enough space to fully process the largest files in memory without any Photoshop disk paging.

     Because of my previous experience with the NAS, I have felt that moving my images back to a NAS would be a step backwards in terms of working time. Now, if my work style were one image at a time, it would probably not be a big deal to work on it on the workstation and save the final to the NAS. However, on a given shoot/trip, I might come back with several thousand images. In the past, that has been somewhere between 40-80GB for each image set. By the way, that's in compressed raw format of about 14MB per image. As soon as an image is opened in Photoshop and resaved, it will hit at least 40MB. It's not uncommon, though, for a file to be 100-300MB, and in some cases over a GB for large stitched panorama files. Fortunately, I typically only process about 5% of the images in a given set with Photoshop, so most will stay around 14MB. The size of the files and of the entire sets will at least triple if I upgrade my camera to one of the larger-sensor cameras currently available. (A back-of-the-envelope sizing sketch is at the end of this list.)

     I tend to work on the entire set of images for a period of time. Because of that, I want those images on a fast read/write storage platform with data redundancy. I have assumed this means working on the set on HDDs or SSDs directly attached to the mobo. If that assumption is correct, then midterm storage is definitely workstation based. The next question is one of long-term storage. Do I just keep the images in a large RAID 1 on the workstation, or migrate an entire image set to a NAS after processing? I'm not sure, though, that a second machine solves the single-point-of-failure question, as the NAS also has a single point of failure in its mobo. Or is the suggestion to use RAID 1 in the workstation AND back everything up to a RAID on the NAS? As mentioned in one of my first posts at the top of the thread, I'm also doing some offsite backup. In addition, I'm considering signing up with a cloud storage company for a better offsite backup program.

     So, I now have more questions than I started with. This is really raising questions about the best working storage strategy, the longer-term storage and backup strategy, AND which RAID level to use for the workstation and/or the NAS. Again, any advice will be appreciated!
  4. MRFS and others... So, if you were in my situation, where data security is paramount, would you use 2x mirrored arrays at 2TB per mirror, splitting data between the two arrays manually, or would you use a RAID 10 setup for a total of 4TB and no need to split data? While I'm aware that I could run a RAID 6, I decided a few years back that doing so in a SOHO environment, without adequate drive monitoring and alerts, was complicated and risky, as two drives might fail and I might not be aware of the failure in time. I suppose, though, that the same risk applies to RAID 10 (see the two-drive-failure sketch at the end of this list). Whatever I end up doing, it will have to be configurable on an Intel Z87 platform with RST or other software to leverage the Z87 platform (max 6 drives). Advice appreciated. Thanks!
  5. Hi Kevin, thanks for sharing your experience! I have no idea what sort of drives they are shipping and agree that their needs may be more related to shipping than to the sort of use I'm thinking of. My use is not high-uptime or high-volume reads/writes, but occasional use of a photo processing workstation where I will generally write/read large files (20MB to 1GB for multi-image stitched files) while working on an image; once I've finished with a given image, I may occasionally access it and other images one or more times in the future. My current setup is an SSD for the OS and "User" files and a RAID mirrored pair (2x2TB) for my photo file storage. I also back up that mirror (to an identical drive) and take it offsite for offsite storage. For the latter, I tend to do this after trips or major processing sessions, which is OK but not really a robust offsite storage approach. When I started looking into new drives, I was thinking regular consumer drives (e.g., the Toshiba PH3300U-1I72 3TB for $110, which receives better-than-average reviews). I've also looked at higher-end drives, and they seem to be 2-3x more expensive (e.g., the WD Black 4TB at $270 or the Deskstar 7K4000 3TB at $260), and, depending on the exact drive, some of the reviews don't seem to be much better than those of the lower-cost drives. I'm not keen on spending two or three times more unless the drives are truly better. At the same time, I don't want to be "penny wise and pound foolish" - after all, I do want a reliable setup for protecting important files! Again, everyone's thoughts/ideas are appreciated! Thanks!
  6. I was just speaking to a major cloud storage company that offers a seed drive service (to get started with large amounts of data). They mentioned that they don't use seed drives larger than 1TB for reliability reasons. In particular, the tech told me they find drives at 3TB or above to be significantly less reliable than smaller drives. That anecdote fits with what I observe reading online reviews. I've looked at a LOT of 3TB drive reviews (from every manufacturer Newegg sells) and see that most 3TB drives get around 50% 5-star ratings (lower than I'd like to see), with many drives getting 18-25% 1-star ratings, which usually cite DOA, short-term failures, and/or data corruption. The failure rates being reported on these drives (in some cases with 700+ reviews) seem quite high. So, I'm wondering, since drive reliability is key for me, whether I should stay with 2TB drives. This will be kind of a pain, since I'm already running out of space on a 2x2TB mirror setup (lots of large photography files). So, while I'm looking for new drives for photo storage, I would rather buy 4x 2TB if that would be more reliable than 2x 4TB (just as an example; see the failure-rate sketch at the end of this list). I'd appreciate your thoughts!
  7. Hi, searching for a while on Intel's site didn't yield anything specific, so I thought I'd ask here. I'm trying to optimize for photo processing, and accessing a NAS for my image storage is just too slow for the really large (1.7GB!) photo files that I'm currently working on. My current setup is a photo processing workstation with file storage on a home-built NAS (based on a consumer mobo with the Intel ICH9R chip). My photos are stored in a RAID 1 pair that was created with Intel's Matrix Storage software. To move them to my photo processing computer, I would like to take the two disks with photos on them and attach them to a different mobo, with an ICH8R chip and Intel RST installed. If anyone knows whether this is possible, please let me know. Also, if there are any special steps or things to be careful to avoid, please let me know. Thanks!
  8. Zombie Drive?

     No problems with it. I ran the tests out of curiosity. It was the bad result reported by Victoria that made me concerned. Your note makes me think I'm misunderstanding the Victoria message. Is the Good/Bad SMART rating in Victoria based on overall SMART stats or something else? Thanks!
  9. Zombie Drive?

     Hi all, I've got a drive that passes WD Data Lifeguard Diagnostics' short and long tests, but I get a SMART = Bad result with Victoria (see below). I've already backed it up just in case, but am wondering if I should continue using the drive or retire it. Advice? Thanks!
  10. This compares Ciprico VST on the ICH9R (i.e., no Ciprico controller) versus Intel's Matrix Storage on the ICH9R.
  11. Corrected image link for Test 1 Parameters: Test 1 Parameters
  12. I've been running some tests to help me decide which way to go with a home file server setup. I thought that others might find it interesting.

      Background
      • IOMeter tests (see links below for the actual parameters)
        o The two tests were intended to represent a small number of simultaneous users accessing relatively large files, like full-rez music, video, and reading/writing large Photoshop files
        o Test two, key differences: larger file access, slightly more sequential access
      • Arrays
        o Array size: 20GB
        o RAID 10 of 4 drives
        o RAID 5 of 5 drives
      • Stripe sizes:
        o Matrix at the default of 64K, except where noted on the Test 2 Matrix RAID 5 results
        o Ciprico: default and unknown (there does not seem to be a way to check or adjust it)
      • Formatting: NTFS, default settings, quick format

      Reading the Results (results accessible via the links below)
      • Fairly self-explanatory overall
      • The words "On" and "Off" refer to write-back cache and read/write cache (Matrix and Ciprico respectively) being on or off
      • Actual results are in the first table, and there is a percentage comparison in the second table
      • The percentage table compares the various RAID array results to the results of that same test with a single drive

      Observations
      • Ciprico performance for these tests is substantially better than Matrix, especially with RAID 5
      • Note that write-back cache decreased performance on the Matrix RAID arrays. This is NOT consistent with previous tests using ATTO, where enabling write-back cache substantially improved write performance

      Next Steps
      • Thinking about running the same tests on three mirrored pairs to see how that compares

      Questions
      • Is it correct to assume a RAID 10 with 6 drives would actually be faster than a RAID 10 of four drives? (A rough scaling sketch is at the end of this list.)

      Test 1 Parameters | Test 2 Parameters | Test 1 Results | Test 2 Results
  13. Notes/questions embedded below. Thanks in advance!

      > Note: I assume you have 5 drives in the RAID 5 array, which means 4x drive capacity with no "hot spare". If you want a hot spare, you need 6 drives; otherwise you will have a problem setting a stripe size of 16K, 32K, 64K, or 128K.
      * Yes, I have been testing a 5-drive RAID 5. If I decide that will be my final solution, I will get a sixth disk for a hot spare.

      > 1 "modern" drive should deliver from 60MB/s (central part) to 100MB/s (edge part).
      * My drives max out at about 70MB/s. There's a firmware update that supposedly can get them to 84.

      > 2 drives in RAID 0 should deliver up to 200MB/s on sequential access.
      * Have not tested.

      > 4 drives in RAID 10 should deliver up to 200MB/s, and your results look OK on this one (to optimize for multiuser access, have the NTFS volume page size equal to 1 drive block size, thus allowing for parallel use of the drives).
      * This is a bit above my knowledge level. I'm going to look up some of these terms. In the meantime, can you point me to a link that explains how to do this, or provide directions?

      > 4 drives from a (5-drive) RAID 5 will deliver up to 400MB/s if it's well configured (stripe size of 16/32/64K with block size of 4/8/16K + NTFS volumes using "full stripe size" pages + NTFS volume offset aligned to the stripe size).
      * This is a bit above my knowledge level. I'm going to look up some of these terms. In the meantime, can you point me to a link that explains how to do this, or provide directions? (An alignment-arithmetic sketch is at the end of this list.)

      > But RAID 5 delivers (at best) 25% less direct IO (= the "multiuser" usage) than your RAID 10 array.
      * Since the ICH9R is limited to a 4-drive RAID 10, I'd like the extra capacity I can get with RAID 5. So, if I can achieve good performance with RAID 5, I'll be happy. Any idea what MB/s would be required to support two full-rez music streams plus large-file Photoshop reads/writes? I really have no idea what my minimum performance requirement should be for my planned use.

      > ==> Anyway, you are very close to a single-user usage.
      * Not sure I understand this. Do you mean that my intended use should be modeled in IOMeter as a single worker? Any recommendations for the correct IOMeter test setup?
  14. I've been trying to assess options for a home file server RAID setup. I originally planned on a RAID 5. Since then, I've read that RAID 10 is better for multiple-user access. So, I've been trying to test options with IOMeter and am very surprised at the results. I'm using an ICH9R implementation. This will be either a 4-drive RAID 10 (the drive limit of the chipset) or a 5-drive RAID 5 with a hot spare. I've decided that I will use write-back cache with a UPS if I use RAID 5. The surprising result is that with two IOMeter workers set up identically (64K transfers, 75% sequential / 25% random, and 75% read / 25% write), performance drops from a max read/write of around 200MB/s to about 16MB/s. I get close to the same result whether it's the RAID 10 setup or RAID 5 with write-back ON. I'm far from knowledgeable on the topic, but this is far lower than I was guessing, so I'm assuming I'm doing something wrong with the test setup. Trying to think out my real-world worst-case usage, it would be like this: simultaneously support streaming full-resolution music to two PCs while reading/writing large Photoshop files (up to 300MB). Alternatively, simultaneously support full-rez video plus the Photoshop use. Could someone provide advice on how to properly set up the IOMeter test for this planned usage? (A rough bandwidth budget is sketched at the end of this list.) Thanks in advance!
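A rough cost-per-TB comparison for the drive choices in item 1. The prices, capacities, and warranty terms come from that post; the cost-per-TB-year metric is just one illustrative way to weigh price against warranty length:

```python
# Cost comparison for the three candidate drives (numbers from item 1).
# (drive, capacity in TB, delivered price in USD, warranty in years)
drives = [
    ("Toshiba PH3300U-1I72", 3, 118, 3),
    ("WD Red WD30EFRX",      3, 147, 3),
    ("WD RE4-GP",            2, 120, 5),
]

for name, tb, price, years in drives:
    per_tb = price / tb
    per_tb_year = price / (tb * years)  # spreads the cost over capacity AND warranty
    print(f"{name:22s} ${per_tb:5.2f}/TB   ${per_tb_year:5.2f}/TB-year of warranty")
```

By this (admittedly crude) metric, the 5-year warranty narrows the gap for the 2TB RE4-GP despite its higher cost per TB.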
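On the temperature-monitoring point in item 2, a minimal sketch of polling a drive's SMART temperature by shelling out to smartctl (from the smartmontools package). The device path, and the assumption that the drive exposes the common Temperature_Celsius attribute (ID 194), both vary by platform and vendor:

```python
import subprocess

def drive_temp_c(device="/dev/sda"):
    """Return the drive temperature in Celsius from SMART, or None.

    Assumes smartmontools is installed and the drive reports the common
    Temperature_Celsius attribute; many drives use other attribute names.
    """
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        cols = line.split()
        if len(cols) >= 10 and cols[1] == "Temperature_Celsius":
            return int(cols[9])  # RAW_VALUE column of the attribute table
    return None

if __name__ == "__main__":
    print(drive_temp_c())
```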
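A back-of-the-envelope sizing sketch for the workflow in item 3. The file sizes, 5% processing rate, and "at least triple" sensor factor come from that post; the image count and shoots-per-year figures are placeholders assumed for illustration:

```python
# Back-of-the-envelope storage estimate (inputs from item 3 unless noted).
RAW_MB        = 14      # compressed raw file size
PROCESSED_MB  = 40      # minimum size once opened/resaved in Photoshop
PROCESSED_PCT = 0.05    # ~5% of a set gets processed
IMAGES        = 5000    # "several thousand images" per shoot/trip (assumed)
SENSOR_FACTOR = 3       # sizes "at least triple" with a larger-sensor camera
SHOOTS_PER_YR = 6       # placeholder; not stated in the post

def set_gb(images, factor=1):
    raw_part = images * (1 - PROCESSED_PCT) * RAW_MB * factor
    psd_part = images * PROCESSED_PCT * PROCESSED_MB * factor
    return (raw_part + psd_part) / 1024

print(f"one shoot, current camera : {set_gb(IMAGES):7.1f} GB")
print(f"one shoot, larger sensor  : {set_gb(IMAGES, SENSOR_FACTOR):7.1f} GB")
print(f"per year, larger sensor   : {set_gb(IMAGES, SENSOR_FACTOR) * SHOOTS_PER_YR:7.1f} GB")
```

With these inputs a current-camera shoot lands around 75GB, consistent with the 40-80GB range reported in the post.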
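On the two-drive-failure worry in item 4: a small sketch of which second failures are fatal. In a 4-drive RAID 10 (or two manually split mirrored pairs, which behave the same way here), only the failed drive's mirror partner is fatal, while RAID 6 survives any two failures. It ignores rebuild windows and correlated failures, so it is illustrative only:

```python
from itertools import combinations

# 4-drive RAID 10: drives 0+1 mirror each other, as do 2+3.
mirror_pairs = [{0, 1}, {2, 3}]
fatal = survivable = 0

# Enumerate every way two of the four drives can fail.
for failed in combinations(range(4), 2):
    if set(failed) in mirror_pairs:   # both halves of one mirror gone
        fatal += 1
    else:
        survivable += 1

total = fatal + survivable
print(f"two-drive failures that kill a 4-drive RAID 10: {fatal}/{total} "
      f"({fatal/total:.0%})")  # -> 2/6 (33%)
print("RAID 6 on the same 4 drives survives all 6 of these combinations")
```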
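On the 4x2TB-versus-2x4TB question in item 6: a sketch of how per-drive failure rates combine across an array. The annualized failure rates below are invented placeholders (the post only says reviewers report larger drives failing more often), so only the shape of the comparison matters:

```python
# Chance of at least one drive failing in a year, for N independent drives.
# AFR values are placeholders, NOT measured figures.
def p_any_failure(n_drives, afr):
    return 1 - (1 - afr) ** n_drives

afr_2tb = 0.03  # assumed annualized failure rate for a 2TB drive
afr_4tb = 0.06  # assumed to be worse for a 4TB drive, per the reviews cited

print(f"4 x 2TB: {p_any_failure(4, afr_2tb):.1%} chance of >=1 failure/year")
print(f"2 x 4TB: {p_any_failure(2, afr_4tb):.1%} chance of >=1 failure/year")
# More spindles mean more individual failures to replace, but with mirroring
# the array only dies when a drive AND its partner fail before the rebuild.
```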
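On the 6-versus-4-drive RAID 10 question in item 12: in the ideal case, striped throughput scales with the number of mirror pairs, so 6 drives should beat 4, controller permitting. A toy model using the ~70MB/s per-drive figure mentioned in item 13:

```python
# Idealized RAID 10 sequential write throughput: data is striped across
# the mirror pairs, so pairs (not drives) set the upper bound. Reads can
# in principle be served from both halves of each mirror, so they may
# scale higher. Real controllers rarely reach these ceilings.
def raid10_seq_write_mb_s(n_drives, per_drive_mb_s=70):
    return (n_drives // 2) * per_drive_mb_s

for n in (4, 6):
    print(f"{n}-drive RAID 10: up to ~{raid10_seq_write_mb_s(n)} MB/s sequential writes (ideal)")
```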
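On the stripe-size and alignment advice quoted in item 13, a sketch of the underlying arithmetic: the partition offset should be a multiple of the full stripe width (per-drive stripe size times the number of data drives). All the sizes below are examples, not prescriptions:

```python
# Alignment check for a RAID 5 volume (sizes are examples only).
STRIPE_KB   = 64          # per-drive stripe size
DATA_DRIVES = 4           # a 5-drive RAID 5 has 4 data drives per stripe
CLUSTER_KB  = 16          # NTFS allocation unit ("block size" in the quote)

full_stripe_kb = STRIPE_KB * DATA_DRIVES   # 256KB of data per full stripe
offset_kb = 1024                           # example partition offset (1MB)

print(f"full stripe width: {full_stripe_kb}KB")
print(f"offset {offset_kb}KB aligned to stripe size:  {offset_kb % STRIPE_KB == 0}")
print(f"offset {offset_kb}KB aligned to full stripe:  {offset_kb % full_stripe_kb == 0}")
print(f"clusters per full stripe: {full_stripe_kb // CLUSTER_KB}")
```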
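A rough bandwidth budget for the worst case described in item 14. The audio bitrate is an assumption (lossless stereo is typically under about 1.5Mbps), as is the target save time; the 300MB Photoshop file size comes from the post:

```python
# Rough bandwidth budget for the worst case in item 14.
AUDIO_MBIT  = 1.5    # assumed per-stream bitrate for full-rez (lossless) music
N_STREAMS   = 2
PSD_MB      = 300    # large Photoshop file, from the post
TARGET_SECS = 5      # assumed acceptable save/open time

audio_mb_s = N_STREAMS * AUDIO_MBIT / 8   # megabits -> megabytes per second
psd_mb_s   = PSD_MB / TARGET_SECS

total = audio_mb_s + psd_mb_s
print(f"music streams: ~{audio_mb_s:.2f} MB/s   photoshop: ~{psd_mb_s:.0f} MB/s")
print(f"total budget : ~{total:.0f} MB/s (streaming is negligible next to the file I/O)")
```

Under these assumptions the Photoshop traffic dominates, which also suggests the ~16MB/s multi-worker result in item 14 would make large saves painfully slow.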