So I took it upon myself to test the WD20EARS drives on a 9650SE-16ML card. First, some background may help. I had 16 WD 1TB green drives running fine in a RAID6 array for about 2 years. I kept the machine on 24/7 and, for the most part, had no problems with the green drives. One drive squealed if it was on for too long. Another kept getting degraded because it had ECC errors. But those issues were attributable to the specific drives, not the RAID card. By the way, I did enable TLER on all of them.
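(For reference, TLER is WD's name for SCT Error Recovery Control. On drives that expose SCT ERC, the equivalent setting under Linux would look something like the line below - the 7-second value and the device path are just examples, and the green drives of that era generally needed WD's DOS-based TLER utility instead:)

    # cap read/write error-recovery time at 7.0 seconds (units are 100 ms)
    smartctl -l scterc,70,70 /dev/sdX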
So anyhow, I bought 12 of the WD20EARS drives. I used 7 of them to temporarily copy 2TB of data to. I did not jumper them because I was using Windows 7 x64, which aligns partitions to 4K boundaries on its own. Then I took down the RAID array and copied the data back from the 7x 2TB drives to 14 individual 1TB drives. Now this is where it got interesting. I then proceeded to use WD's tool to zero out the 7 2TB drives. I think 3 of them zeroed out just fine. The other 4 had some bad sectors on them that made the WD tool hiccup. (Quick aside - WD really needs to update their tool to allow skipping of bad sectors automatically. I'd leave the PC on for 5 hours, come back, and realize the program had been waiting for me to push YES to continue. Terrible.) Anyhow, the 4 eventually did zero out just fine once I skipped a couple of bad sectors.
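(If you'd rather not babysit WD's tool, the same zero-fill can be done from any Linux live CD with dd. /dev/sdX is a placeholder - double-check it, since this destroys everything on the drive:)

    # zero-fill the entire drive; writing usually forces the drive to remap pending bad sectors
    dd if=/dev/zero of=/dev/sdX bs=1M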
After zeroing out the 7 drives, I jumpered all 12 of them because I didn't want to take any chances with the 4K sectors on the 3ware RAID card. I also ran WDIDLE3 and set each drive's idle (head-park) timer to 300 seconds. I then proceeded to plug all 12 drives into the 3ware RAID card and create a new RAID6 unit. It took overnight to complete, but in the morning one of the drives was in a degraded state (one of the problematic drives from the original 7). I pulled the drive and ran WD's SMART test on it, which it failed. So now I'm in the process of RMA'ing that drive. Although the array was in a degraded state because of the missing drive, it was still usable. The WD advance RMA was going to take a while, so I decided to redo the array with just 11 drives and use the 12th drive as a spare once the replacement arrives. I'm happy to report that the other 11 drives have been working fine on the RAID array, running 24/7, for almost a week.
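(For anyone repeating this: WDIDLE3 runs from a DOS boot disk, and the unit creation can also be scripted with 3ware's tw_cli instead of the BIOS or 3DM2. The controller number and port range below are just examples for my 12-drive layout:)

    REM DOS boot disk: set the head-park idle timer to 300 seconds
    WDIDLE3 /S300

    # from the OS, using 3ware's tw_cli: create a RAID6 unit from ports 0-11 on controller 0
    tw_cli /c0 add type=raid6 disk=0-11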
Verdict - the WD20EARS drives DO WORK on the 9650SE RAID card. I only tested them jumpered, but why risk running them unjumpered?
I'll report back in a month or so to let you all know if any of the drives fall out of the array. I had done a lot of internet research before I bought the drives, and I believe I'm the only one so far who has tested [and reported] that the drives do work on the 3ware card. I remember talking to 3rd-level LSI support just a month ago, and he briefly mentioned that they had tried it and had to send all the drives back (I didn't ask him why at the time). Then I emailed him a couple of weeks later, when I was about to buy the drives, and he told me they hadn't even tested the drives. Go figure. So anyway, I'm glad the drives worked out, but I'd still like to give a special middle finger to LSI for not testing this themselves and leaving consumers to fend for themselves. This will be the last 3ware/LSI product I'll ever own.
Hey, just curious to hear if you are still running that array and, if you are, how's it been treating you?
I've got the 3ware 9650SE-16ML with 13x WD20EARS connected. The initial initialization process when creating a RAID6 array across all the drives hung at 3%, and after about 10 minutes one of the WD20EARS was marked as DEGRADED. I've tried this a few times and the same drive keeps becoming degraded, so I'm planning on RMA'ing it just as soon as the stores open.
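(If it helps anyone chasing something similar, the unit and per-port status during initialization can be watched with 3ware's tw_cli - the controller and port numbers below are examples:)

    # overall summary of units and ports on controller 0
    tw_cli /c0 show
    # detail for the suspect drive, e.g. on port 5
    tw_cli /c0/p5 show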
So if anyone is interested, I can keep updating here with my results on this project.