Kremlar

  1. I ended up creating 2 virtual disks in the RAID controller itself. Once I did that, Windows saw them as 2 separate drives and everything worked without a hitch. Thanks!
  2. "best" RAID configuration?

     You'll get a lot of opinions on this; I'll give you mine. With 6 drives I'd try to get at least 2 separate arrays going: simultaneous disk access will be faster across 2 arrays than across 1 large array with 2 partitions, and a 3-drive RAID 5 is much slower than a 4-drive RAID 5 in my experience. With only 6 drive bays available I don't think a hot spare is worthwhile. I'd go with one of the following: (3) RAID 1 arrays (mirrors), or (1) RAID 1 array + (1) 4-drive RAID 5 array (see the layout capacity sketch after this list). I'd then purchase a spare hard drive to keep on hand as a "cold spare" should a failure occur. Good luck!
  3. Wanted to follow up on this. Unfortunately I was still having intermittent issues with drives dropping out on both servers, though much less often than before. I ended up contacting WD again, who sent me an even newer firmware. I went through and updated the firmware on the backplane, RAID controller AND drives once again. Release notes for all 3 firmware updates listed fixes that could plausibly address my problem. No issues so far, but it's only been a few weeks. Fingers crossed!
  4. I have a large RAID 6 array (8x 2TB drives, so about 12TB of available storage; see the RAID 6 capacity sketch after this list). I need to maximize available space, but I wanted a separate partition for the OS (Windows Server 2008 R2 64-Bit). I figured I'd create a 100GB C: partition for the OS and apps, then leave the rest for a D: partition. When installing the OS, the partitioning utility initially shows 1 continuous 12TB drive. But after I create the 100GB OS partition, it then creates a small 'system reserved' partition and divides the remaining space in two (a 1.9TB chunk and a ~10TB chunk). I'd like that remaining space combined so I can create a single 11.9TB partition. Not sure what's going on. Can someone enlighten me? Thanks!
  5. Still looking good! (knock on wood) It took maybe an hour because I did it a bit cautiously, maybe a bit less. I had a test system next to the server that I used to upgrade the firmware on each drive. I did the following:
     - Pulled 1 drive from the RAID 1, upgraded its firmware, reinstalled it in the server, booted & tested
     - Pulled the 2nd drive from the RAID 1 and one drive from the RAID 5, upgraded firmware, reinstalled in the server, booted & tested
     - Pulled the last 3 drives from the server, upgraded firmware, reinstalled in the server, booted & tested
     - Repeated the process for the 2nd server
  6. 1 month and no issues so far with this new firmware.
  7. From WD themselves. You have to bark loudly, but if you do you'll get someone who can help. I have no idea if this new firmware will help; I probably won't know for a couple of months, since we had a month+ gap between sets of failures.
  8. I updated the firmware of the controller to the latest from LSI, and also loaded the new firmware from WD (4V03). Will post back my results.
  9. I believe it's an LSI 8708EM2. I do notice that LSI has newer firmware available (dated 1/10) than Intel (dated 8/09), but I typically try to stick with Intel-tested/approved firmware and drivers whenever possible. Perhaps I'll switch to the latest LSI firmware/drivers in this case. I'm also being given a new firmware from WD to try.
  10. I'm having a serious issue with 12 WD3000HLFS drives installed in 2 production servers running Windows Server 2008 R2 64-Bit and Windows Server 2008 64-Bit. Both servers are configured with identical hardware; the significant specs are as follows:
      - Intel SC5650UP Chassis
      - Intel S3420GPLC Motherboard
      - Intel AXX6DRV3GR Hot-Swap Backplanes
      - Intel SRCSASBB8I RAID Controllers
      Each server has 6 drives installed (2 in a RAID 1, 4 in a RAID 5). All drivers and firmware are up to date on the controller, motherboard, and backplane. Both servers have been experiencing seemingly random drives being marked as "failed" - 6 drives total in about 1.5 months. 3 of the drives were replaced; the rest were simply rebuilt using the same drives, since we started doubting that the drives had actually failed. 3 drives dropped in about 1 week, then we had a month or so of success, followed by 3 more drops in a span of less than a week. All drives are reported as running firmware 4V02 per the RAID controller (see the firmware-check sketch after this list), and one sample drive I pulled had a mfg date of 10/7/09. The drives are listed on Intel's hardware compatibility sheet, so I thought we were safe going with them. Intel has not been of any assistance other than pointing me to this thread, and WD has not been of any help at all so far. Has anyone had any successful resolution to random RAID dropout issues with these drives?
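
Layout capacity sketch (referenced from post 2): a minimal back-of-the-envelope comparison of usable space for the two suggested 6-bay layouts. The per-drive size is a placeholder assumption (set to 1 TB below); the RAID overhead rules themselves are standard.

```python
# Rough usable-capacity comparison for the two 6-bay layouts suggested in post 2.
# DRIVE_TB is an assumed placeholder -- substitute the real drive size.

DRIVE_TB = 1.0  # assumed size of each drive, in TB

def raid1_usable(drives: int, size_tb: float) -> float:
    """RAID 1 mirrors the data, so usable space is half the raw total."""
    return drives / 2 * size_tb

def raid5_usable(drives: int, size_tb: float) -> float:
    """RAID 5 spends one drive's worth of space on parity."""
    return (drives - 1) * size_tb

# Option A: three separate 2-drive RAID 1 mirrors
option_a = 3 * raid1_usable(2, DRIVE_TB)

# Option B: one 2-drive RAID 1 mirror plus one 4-drive RAID 5
option_b = raid1_usable(2, DRIVE_TB) + raid5_usable(4, DRIVE_TB)

print(f"3x RAID 1:               {option_a:.1f} TB usable")  # 3.0 TB
print(f"RAID 1 + 4-drive RAID 5: {option_b:.1f} TB usable")  # 4.0 TB
```

The simultaneous-access argument in post 2 is about spindle contention, not capacity; the numbers above only show what each layout gives up to redundancy.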
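
RAID 6 capacity sketch (referenced from post 4): the arithmetic behind "8x 2TB drives, so about 12TB", plus the decimal-TB versus binary-TiB conversion that makes Windows report a smaller figure than the marketing capacity. This is only the arithmetic; it does not explain the partition split itself, which post 1 ultimately worked around by creating 2 virtual disks in the RAID controller.

```python
# RAID 6 usable capacity for post 4's array, and the TB-vs-TiB conversion.

DRIVES = 8
DRIVE_TB = 2          # decimal terabytes (10**12 bytes), as marketed
RAID6_PARITY = 2      # RAID 6 reserves two drives' worth of space for parity

usable_tb = (DRIVES - RAID6_PARITY) * DRIVE_TB   # 12 decimal TB
usable_bytes = usable_tb * 10**12
usable_tib = usable_bytes / 2**40                # binary TiB, what Windows labels "TB"

print(f"Usable capacity: {usable_tb} TB (decimal)")   # 12 TB
print(f"Reported by Windows: ~{usable_tib:.2f} TB")   # ~10.91
```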
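
Firmware-check sketch (referenced from post 10): a hedged example of confirming which firmware level the controller reports for each physical drive. It assumes the Intel SRCSASBB8I answers to LSI's MegaCLI utility (post 9 suggests it is a rebranded LSI 8708EM2); the install path and the "Slot Number" / "Device Firmware Level" field names are assumptions based on typical `MegaCli -PDList` output, so adjust them to match what the tool actually prints on these servers.

```python
# Hedged sketch: list each physical drive's reported firmware level via MegaCLI.
import subprocess

MEGACLI = r"C:\Program Files\MegaRAID\MegaCli64.exe"  # assumed install path

def list_drive_firmware() -> list[tuple[str, str]]:
    """Return (slot, firmware level) pairs parsed from `MegaCli -PDList -aALL` output."""
    out = subprocess.run(
        [MEGACLI, "-PDList", "-aALL"],
        capture_output=True, text=True, check=True,
    ).stdout

    drives, slot = [], "?"
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Slot Number:"):
            slot = line.split(":", 1)[1].strip()
        elif line.startswith("Device Firmware Level:"):
            drives.append((slot, line.split(":", 1)[1].strip()))
    return drives

if __name__ == "__main__":
    EXPECTED = "4V03"  # the newer WD firmware mentioned in post 8
    for slot, firmware in list_drive_firmware():
        note = "" if firmware == EXPECTED else "  <-- not on the expected firmware"
        print(f"Slot {slot}: {firmware}{note}")
```

Running a check like this before and after loading the WD update would confirm that every drive actually took the new firmware.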