collinmcclendon

LSI MegaRAID 1600 - Degraded Logical Drive Problem

4 posts in this topic

Hello all,

Since I've gotten some good pointers here in the past on the LSI MegaRAID 1600 I have, I'd like to ask if anyone has a clue what is going on with my setup.

I have four 36 GB drives in RAID 5 on an LSI MegaRAID 1600. Recently the drive with ID 3 failed. The two logical drives I have are still recognized, but they now show up in Partition Magic as unallocated space. Totally blank. Is this some sort of way to "protect" me until I can get the fourth drive back on the controller and start a rebuild? I have a full backup made with Acronis, but I would have to recover my FreeBSD install from scratch. I have a drive coming on advance RMA from Maxtor, but it won't be here for 3 to 4 days. I could restore from the backup, but I don't want to mess with the array if there is a chance I can get my data back. Any ideas?

Thanks for any advice,

Collin

Ummm. RAID-5 with four 36 GB HDDs. HDD ID #3 fails. This should leave a three-drive RAID-5 in a fully operational but "degraded" (not fault tolerant) state. The array should still be functioning.
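
To see why one dead drive shouldn't blank anything: RAID-5 parity is just the XOR of the other blocks in each stripe, so the missing drive's data can be recomputed on the fly from the three survivors. A toy illustration in Python (real controllers rotate the parity block across the drives and use much bigger stripes; this pins everything to fixed "drives" for simplicity):

    # One stripe: three data blocks plus a parity block (byte-wise XOR).
    data = [b"AAAA", b"BBBB", b"CCCC"]                    # drives 0-2
    parity = bytes(a ^ b ^ c for a, b, c in zip(*data))   # drive 3

    # Drive 1 dies: rebuild its block from the other data blocks plus parity.
    rebuilt = bytes(x ^ z ^ p for x, z, p in zip(data[0], data[2], parity))
    assert rebuilt == data[1]   # nothing was actually lost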

You say you see 2 logical drives. I assume you had the 4-drive RAID-5 partitioned into two separate logical drives (why?). Now these two logical drives appear as unallocated space. So the partition table is apparently still valid, but there's no data in the partitions.

The partitions are "logical" contrivances of the OS. The RAID controller has no way to know how the OS will interpret data on the array, so it can't modify that interpretation to "fool" the OS. I don't see how the controller could do what you're thinking it's doing, or, even if it could, how that would protect anything.

Mostly, when a controller thinks data integrity is at risk, it'll write-protect the array but still allow full read access. My guess: you've lost your data.
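
If you want to check what's actually in the partition table before writing the data off, you can dump the first sector of each logical drive and look at it directly. A rough sketch in Python, assuming FreeBSD exposes the first logical drive as something like /dev/amrd0 (the device name is a guess; use whatever your OS calls the array, and run it as root):

    import os
    import struct

    DEV = "/dev/amrd0"   # example node only; adjust for your system

    fd = os.open(DEV, os.O_RDONLY)
    sector = os.read(fd, 512)   # the MBR lives in the first 512-byte sector
    os.close(fd)

    # A usable MBR ends with the 0x55 0xAA boot signature.
    print("boot signature present:", sector[510:512] == b"\x55\xaa")

    # Four 16-byte partition entries start at offset 446 (0x1BE).
    for i in range(4):
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        print("entry %d: type=0x%02x start=%d sectors=%d"
              % (i, entry[4], lba_start, num_sectors))

If the signature and partition entries are intact, the table wasn't really wiped and something is just misreading the degraded array; all zeroes would mean the MBR genuinely is blank.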

Wait until the replacement drive shows up, rebuild the array with that drive, and then, if the data doesn't come back, restore from backup.

Thanks for your input! I have two logical drives dividing up the RAID 5 array because I run Windows XP on the first and FreeBSD on the second. This makes my life much easier when it comes to backing up the system; I just back up each whole drive. I was sure I would be able to use the array in degraded mode, like you confirmed. However, now that both logical drives have blank MBRs (why, I wonder?), I'll have to restore. I was going to wait for the new drive from Maxtor anyhow and rebuild the array. Very strange, though.

Well, I did find that with a new SCSI cable I was able to force the "failed" drive back online. It's been working fine for 24 hours now. Once I did that, the logical drives were no longer blank. Very strange.
