LSI MegaRAID 1600 - Degraded Logical Drive Problem


#1 collinmcclendon

  • Member
  • 12 posts

Posted 16 February 2005 - 03:29 PM

Hello all,
Since I've gotten some pointers on the LSI MegaRAID 1600 here in the past, I'd like to ask if anyone has a clue what is going on with my setup.
I have four 36 GB drives in RAID-5 on an LSI MegaRAID 1600. Recently, the drive with ID 3 failed. The two logical drives I have are still recognized, but they now show up in Partition Magic as unallocated space. Totally blank. Is this some sort of way to "protect" me until I can get the fourth drive back on the controller and start a rebuild? I have a full backup made with Acronis, but I would have to recover my FreeBSD install from scratch. I have a drive coming on advance RMA from Maxtor, but it won't be here for three to four days. I could restore from backup, but I don't want to mess with the array if there's a chance I can get my data back. Any ideas?
Thanks for any advice,
Collin


#2 MaxtorSCSI

  • Member
  • 346 posts

Posted 16 February 2005 - 04:05 PM

Ummm. RAID-5 with four 36 GB HDDs. HDD ID #3 fails. That should leave a three-drive RAID-5 in a fully operational but "degraded" (not fault-tolerant) state. The array should still be functioning.
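
For the curious, the reason a degraded RAID-5 keeps working is that each stripe's parity block is just the XOR of its data blocks, so any single missing block can be recomputed from the survivors. A minimal sketch in Python (a hypothetical one-stripe layout, purely to illustrate; a real controller rotates parity across the drives and uses much larger stripe sizes):

    def xor_blocks(blocks):
        # XOR a list of equal-length byte blocks together.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    # One stripe on a 4-drive RAID-5: three data blocks plus one parity block.
    d0, d1, d2 = b"\x11" * 8, b"\x22" * 8, b"\x33" * 8
    parity = xor_blocks([d0, d1, d2])

    # The drive holding d2 "fails": XOR the surviving blocks (including
    # parity) to rebuild the missing one. This is what the controller
    # does on every read while the array runs degraded.
    rebuilt = xor_blocks([d0, d1, parity])
    assert rebuilt == d2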

You say you see 2 logical drives. I assume you had the 4-drive RAID-5 split into two separate logical drives (why?). Now these two logical drives appear as unallocated space. So the controller is still presenting both logical drives, but their partition tables now read as empty.

The partitions are "logical" contrivances of the OS. The RAID controller has no way of knowing how the OS will interpret the data on the array, so it can't modify that interpretation to "fool" the OS. I don't see how the controller could do what you're thinking it's doing, or, if it did, how that would protect anything.

Usually, when a controller thinks data integrity is at risk, it will write-protect the array but still allow full read access. My guess is you've lost your data.

Wait until the replacement drive shows up, rebuild the array with that drive, and then, if the data doesn't come back, restore from backup.
Yes, I do actually work at Maxtor. However, the opinions and statements expressed in my posts are my own; they do not represent the policies or opinions of Maxtor Corporation and should not be construed as such!

#3 collinmcclendon

  • Member
  • 12 posts

Posted 16 February 2005 - 04:44 PM

Thanks for your input! I have two logical drives dividing up the RAID-5 array because I run Windows XP on the first and FreeBSD on the second. That makes my life much easier when it comes to backing up the system: I just back up the whole drive(s). I was sure I would be able to use the array in degraded mode, as you confirmed. However, now that both logical drives have blank MBRs (why, I wonder?), I'll have to restore. I was going to wait for the new drive from Maxtor anyhow and rebuild the array. Very strange, though.
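
If anyone wants to double-check what Partition Magic is claiming, here's a quick read-only sketch in Python that dumps the MBR partition entries from the first sector. (Assumptions: a classic MBR-style disk and a raw device path; /dev/da0 below is just an example, substitute your own. It never writes anything.)

    import sys

    SECTOR = 512

    def inspect_mbr(path):
        # The MBR lives entirely in the first sector of the disk.
        with open(path, "rb") as dev:
            mbr = dev.read(SECTOR)
        if len(mbr) < SECTOR or mbr[510:512] != b"\x55\xaa":
            print("No 0x55AA boot signature -- the MBR itself is blank.")
            return
        # Four 16-byte partition entries start at offset 446.
        found = False
        for n in range(4):
            e = mbr[446 + n * 16 : 446 + (n + 1) * 16]
            if e[4] != 0:  # byte 4 is the partition type; 0 means unused
                found = True
                print("Entry %d: type 0x%02x, start LBA %d, %d sectors"
                      % (n, e[4], int.from_bytes(e[8:12], "little"),
                         int.from_bytes(e[12:16], "little")))
        if not found:
            print("Signature present, but all four partition entries are zeroed.")

    if __name__ == "__main__":
        inspect_mbr(sys.argv[1])  # e.g. python inspect_mbr.py /dev/da0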

#4 collinmcclendon

  • Member
  • 12 posts

Posted 19 February 2005 - 03:52 PM

Well, I did find that with a new SCSI cable I was able to force the "failed" drive back online. It's been working fine for 24 hours now. Once I did that, the logical drives were no longer blank. Very strange.


