JonDfox

Can you install Windows 7 Pro on software RAID 0?


Hi

What do you mean by "software RAID"?

If you mean Windows dynamic disks, the answer is no! Only mirroring is supported for the boot volume (at least that was the case back when I tried it on Windows Server 2003).

If you mean a motherboard chipset RAID, then most probably you also have drivers that you can load during the Windows install. Windows will then see the drives as one regular drive and will install on it (first create the array in the controller BIOS).

Please note that stripping is very very dangerous... performance-wise you're better off with a cheap SSD.

m a r c

Edited by lecaf


> Is this possible?

No.

The settings for a "software RAID" must be read by the OS from a working C: system partition. A motherboard BIOS won't know what to do with such a "software RAID", which is very different from a RAID managed by the motherboard's chipset, e.g. ICH10R.


> Please note that striping is very very dangerous (NOT spelled "stripping")

I dispute that claim chiefly because:

(1) each drive in a 2-member RAID 0 array experiences roughly ONE-HALF of the wear that a single JBOD drive would experience with the same workload;

(2) each drive in a 4-member RAID 0 array experiences roughly ONE-FOURTH of that wear;

(3) when one drive in a RAID 0 array fails, it must be replaced; as such, this is no different from replacing a single JBOD drive when it fails ("same difference," as we used to say in grad school).

I don't exactly know where all this fear-mongering about RAID 0 arrays originated. We use lots of RAID 0 arrays, and they work fine -- both chipset-controlled and with third-party controllers.

And building RAID 0 arrays with SSDs should increase their longevity (see (1) and (2) above), but be aware that not all Intel chipsets support the TRIM command with RAID 0 arrays, and few if any third-party controllers support TRIM. The latter is why we recommend that you take note of the scores "After 30 Min. Idle" here:

http://www.xbitlabs.com/articles/storage/display/toshiba-thnsnh_5.html#sect0

Edited by MRFS


I agree with MRFS that there is no reason to shun RAID 0, provided that you are prepared for possible failure. Be sure to use good matched drives (i.e. Caviar Black or better).

In any case, I strongly suggest that you use Acronis True Image or similar software to image your boot drive regularly. I once had a RAID 0 failure on my boot drive, but I was able to get back up and running in less than 45 minutes because I had a bootable Acronis disc and a recent image on a backup disk.

It's a good idea to keep your important data files on a different (preferably mirrored) drive, and to use an online backup system.

Edited by dietrc70


> Caviar Black

No.

WDC's Caviar Black HDDs do NOT support TLER (time-limited error recovery); as such, those HDDs may be dropped by a RAID controller if/when they do not respond quickly enough to the routine "polling" requests issued by the controller.

For example, I've seen passionate complaints posted at Newegg by users who configured RAID arrays with multiple Caviar Black HDDs. The "polling" error only seems to appear after those arrays start to fill up with lots of data, because the drives' internal error recovery logic takes more time as the amount of stored data increases. The "error" was therefore the user's fault for adding multiple Caviar Blacks to a RAID array, when those HDDs are designed by WDC to operate in JBOD mode ONLY.

WDC recommends their "RAID Edition" HDDs for all RAID arrays, because those "RE" HDDs all support TLER.

> Acronis True Image

The free Acronis True Image Western Digital Edition will create a bootable disc, but that software does NOT include third-party RAID drivers, as far as I know. I tried it recently, and there is no "F6" functionality in the software written to a bootable disc by that edition of Acronis.

That functionality should be in the retail version of Acronis, however. F6 functionality is in the Symantec GHOST versions that we use, i.e. versions 9 and 10: the bootable disc does query the user for third-party device drivers. Check with Acronis tech support to be sure that Acronis supports F6 functionality.

Over time, we've settled on this scheme:

|... C: .... | ... D: ...................... | <--- 2 NTFS partitions on primary drive
|... E: .... | ... F: ..........| <--- 2 NTFS partitions on secondary or tertiary drive

where:

C: is the first NTFS partition on the primary drive, typically 30-50 GB
E: is the first NTFS partition on a secondary or tertiary drive, exactly the same size as C:
D: is the NTFS data partition on the primary drive
F: is the NTFS data partition on the secondary or tertiary drive

Drive images of C: are initially written to F: -- to minimize head thrashing -- and then copied from F: to D: for redundancy.

All of our workstations have at least 4 HDDs, so we've written a simple batch program that does the copying automatically, e.g. by appending a serial number to the target folder: images.001, images.002, images.003, etc. (see the sketch below).
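Here is a minimal sketch of that kind of script; F:\images, the D:\images.NNN naming, and the xcopy switches are illustrative assumptions, not our exact program:

@echo off
rem Sketch only: copy the current image set from F: to the next
rem serial-numbered folder on D: (paths and names are illustrative).
setlocal enabledelayedexpansion
set SRC=F:\images
set N=0

rem Find the highest existing serial number among D:\images.*
for /d %%D in (D:\images.*) do (
    set EXT=%%~xD
    rem Strip the leading dot; the 1xxx-1000 trick forces decimal math
    set /a CUR=1!EXT:~1! - 1000
    if !CUR! gtr !N! set N=!CUR!
)

rem Next serial number, zero-padded to 3 digits
set /a N+=1
set PAD=00!N!
set PAD=!PAD:~-3!

rem Copy everything, including empty subfolders and hidden files
xcopy "%SRC%" "D:\images.!PAD!\" /e /i /h /k
endlocal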

The latest drive image is then restored to E:, so that the PC is bootable from both C: and E:. This task can be done with Acronis True Image Western Digital Edition, provided that there is at least one WDC HDD installed in your PC; if not, that edition of Acronis will NOT install.

We've found that restoring a drive image is usually much faster when it is executed as a Windows task rather than from a bootable disc. Executing the restore as a Windows task also will usually NOT run afoul of any third-party RAID drivers that are required to do I/O with a RAID array, and there is no need to modify the BIOS to boot from an optical drive. You will probably need to modify the motherboard's BIOS to boot from E: instead of from C:, however.

By comparison, the bootable disc that comes with older versions of Symantec GHOST takes a very long time to initialize.

Hope this helps.

Edited by MRFS


> Please note that striping is very very dangerous (NOT spelled "stripping")

Hmm, maybe I've been to too many strip clubs lately... and stripping is dangerous: you can catch a cold; it's wintertime, after all.

> I dispute that claim chiefly because: ... wear ...

I would agree with your analysis if all hard disks were born equal against wear. I've yet to see an array (RAID 1 or 5 or 6 or whatever) where all the drives failed in the same time period. The first can die in months, the next in years...

> I don't exactly know where all this fear-mongering about RAID 0 arrays originated.

In the old days I had an Athlon 64 with a RAID 0 boot drive (Windows 2000, later upgraded to XP). While the performance was superb, one ugly day I lost the array, and my OS was a goner. It wasn't a major blow, as no data was irreplaceable (I did plan for that), but re-downloading and re-installing gigs of Steam games is time-consuming and no fun. So I guess the origin of my fear-mongering is... my own personal experience.

For an enterprise, RAID 0 (database logs are a good example) -- yes, I'm for it, as long as disaster recovery is planned and downtime is mitigated. I don't think that was the scope of Jonas's question.
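To put a rough number on that risk (figures illustrative): if each member drive independently fails with probability p over some period, an n-drive stripe loses everything with probability 1 - (1 - p)^n, because the array dies when ANY member dies. With p = 5% per drive, a 2-drive RAID 0 fails with probability 1 - 0.95^2 ≈ 9.8%, and a 4-drive RAID 0 with probability 1 - 0.95^4 ≈ 18.5% -- always worse than any single drive in it.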

m a r c

Edited by lecaf


> if all hard disks were born equal against wear

> all drives broke in the same time period

I believe you are begging the question with those 2 conditions.

Statistically speaking, both are true with an expectation that exceeds ~95%, particularly when enterprise-class HDDs, e.g. Western Digital RAID Edition, are the members of a RAID array.

(I will discount RAID arrays built with WDC's Caviar Blacks, because those HDDs do NOT support TLER, and WDC expressly warns against using Caviar Blacks in RAID arrays for that reason. The probability of a failure using Caviar Blacks in a RAID array is therefore HIGH, chiefly because error recovery logic is known to cause a controller to drop one of those HDDs from the array when it does not respond promptly to a routine "polling" request. I call that a "user error.")

When enterprise-class HDDs are supplied with clean and reliable input power, when they are properly cooled with humidity control, when they are routinely cleaned of accumulated dust, and when they are not vibrated excessively (e.g. transported a lot on bumpy highways), THEN it is reasonable to expect about 5 in every 100 HDDs to fail during their factory warranty periods.

That is another way of saying that approximately 95 out of every 100 enterprise-class HDDs are born equal against wear during their factory warranty periods, and none of those 95 can be expected to break during that time. The other 5 are NOT born equal against wear, and they should be expected to break during their factory warranty periods.

If any of the conditions above are NOT true -- e.g. if HDD subsystems are not supplied with clean UPS power, or are not constantly and properly cooled, or are subjected to excessive vibration -- the statistical probability of a failure goes UP.

I should add here that an OS partition, like the Windows C: system partition, should also be defragmented on a regular schedule if it is hosted on HDDs, so as to minimize armature thrashing and thereby minimize wear on that key servo mechanism.
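If you want that on a schedule, something like the following works on Windows 7 (the task name and timing are illustrative, not a prescribed setup):

rem Sketch: weekly defrag of C: at 3:00 AM under the SYSTEM account
schtasks /Create /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM /TN "WeeklyDefragC" /TR "defrag.exe C:"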

"Short-stroked" C: partitions also perform much better

because the outermost tracks store more data than

the innermost tracks, and the outermost tracks require

much less movement of the read/write armature.

Lastly, a RAID 0 array with 2 members will, over time, subject each member drive to 50% of the wear that a JBOD drive would experience with the same workload, as measured by the total number of input/output operations.

And it's just bad planning to store only one copy of any data on a RAID array, just as it is bad planning to store only one copy of any data on a JBOD HDD. All data should be stored redundantly, so as to render it "independent" of hardware failure: we know that hardware fails, and when it does, a sound backup plan replaces the failed drive with a new one and restores all of the data that was on the failed drive.

In decision theory, we plan for the worst possible case, and that means we can survive any expected disaster. An example of the "worst possible case" would be a major fire that completely consumed your building(s) and everything in them before the fire could be extinguished. That plan should also anticipate likely events that fall somewhere between the "best possible case" and the "worst possible case".

My 4 cents :)

