tdwyer

1 Terabyte Home PC Advice


Hi,

I would like to build a terabyte PC for home use (it's for testing large database applications) and would like some advice. Basically I set myself the criteria of 1TB of storage, 1GB of RAM and two CPUs, hopefully for under £3000.

Having done the research, I have come up with the following. (Because I do not know which case/PSU configuration I can use, I have dropped the spec slightly, and I am also reusing some of my current PC's parts.)

1 * Asus A7M266-D Dual Athlon

2 * AMD Athlon MP1900+

2 * TaiSol CGK760092 CPU Cooler

1 * RocketRaid 404 4Ch ATA133 Raid Host Adaptor

1 * Coolermaster ATCS-200 (No PSU) (CA-003-CM) - I already own this case

4 * Western Digital Caviar 120GB Special Edition (HD-001-WD) - I was going to use the 160GB drives but saw these had an 8MB cache.

2 * Crucial 512MB DDR PC2100 CAS-2.5 (MY-006-CR)

As you can see, I have not achieved the terabyte I was hoping for, because I do not know how to power the drives with the current configuration.

Is this an ok config?

Thanks for the advice

Best Regards

Trevor


There's really not that much to it. You just need a case big enough to physically hold the drives.

For power supplies, consider that most IDE drives consume on the order of 10-12 watts, so even 8 drives (8 * 120GB = 960GB) use less than 100 watts. Any good-quality 450-watt or higher power supply will be fine; Enermax has a nice 550-watt unit, and with the dual Athlons the Enermax 550 is probably a good idea. (BTW: it's sold as a 650-watt unit, but it's actually just 550 watts; the 650 rating is for peak startup current.)
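A quick back-of-the-envelope sketch of the power maths above (the 10-12 W per drive figure is the one quoted in this post, not a measured value):

```python
# Rough power-budget sketch for an 8-drive IDE array (illustrative figures only).
WATTS_PER_DRIVE = 12   # upper end of the ~10-12 W running draw quoted above
NUM_DRIVES = 8         # 8 x 120 GB ~= 960 GB raw

drive_watts = WATTS_PER_DRIVE * NUM_DRIVES
print(f"{NUM_DRIVES} drives draw roughly {drive_watts} W while running")
# -> 96 W, comfortably under 100 W, so running drive power alone is not the sizing problem.
```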

If you want to do hardware RAID, look at the 8-channel 3Ware 6800, 7810 or 7850. You can find the 6800 cards on eBay for around $150, while the 7850 runs about 5 bills. Otherwise, if you just want software RAID, you can use any combination that gets you 8 drive connections; a couple of basic ATA100 cards will give you 8 drives and cost less than $100.

I would like to build a terabyte PC for home use (it's for testing large database applications)

Riiight. It's REALLY so you can point at your home PC and say, "You know, there's a terabyte of storage in there." But just to save face for ya, I'll go along with your database story. ;)

Peace

policy


Surely you're not considering a 1TB database without RAID! If you stick with WD1200xBs, you're limited to about 0.75TB in RAID 5 (use the 3ware 7850) and about 450GB in RAID 10. Using 160GB Maxtor drives will get you to 0.9TB in RAID 5 or 0.5TB in RAID 10, but wastes about 20GB per drive. Hardware RAID controllers supporting ATA/133 channels and drives larger than 128GB are on the way, as are controllers with more than 8 channels.
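To make those capacity figures concrete, here is a rough sketch of the RAID arithmetic behind them; the ~137 GB per-drive ceiling for the 160 GB drives is an assumption based on the "wastes about 20GB per drive" remark (the old 128 GiB addressing limit):

```python
# Rough usable-capacity sketch for an 8-port controller, using figures from the post above.
GIB = 2**30

def usable_tib(drive_gb, n_drives, level, per_drive_cap_gb=None):
    """Usable capacity in TiB for simple RAID levels (drive sizes in decimal GB)."""
    per_drive = min(drive_gb, per_drive_cap_gb or drive_gb) * 10**9
    if level == "RAID5":            # one drive's worth of capacity goes to parity
        usable = (n_drives - 1) * per_drive
    elif level == "RAID10":         # half the drives mirror the other half
        usable = (n_drives // 2) * per_drive
    else:
        raise ValueError(level)
    return usable / (1024 * GIB)

for drive_gb, cap in ((120, None), (160, 137)):   # 137 GB ~= the old 128 GiB addressing limit
    for level in ("RAID5", "RAID10"):
        print(f"8 x {drive_gb} GB drives, {level}: ~{usable_tib(drive_gb, 8, level, cap):.2f} TiB usable")
```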

If a full 1TB of storage is a firm requirement, you will have to use either SCSI or a software solution of one form or another to join multiple RAID arrays into one large volume. For all I know, any decent database will do this automatically...


Um... Actually, all the 3Ware cards support ATA133 and drives > 128GB; it was just a firmware upgrade for the older cards.



tdwyer

There is a huge difference between building a terabyte storage area and building a terabyte database.

I am surprised at the advice you have received above at a forum of StorageReview's quality. That advice would serve for building a 1TB file server. If you are serious about playing around with 1TB or even 100GB databases, you need a large database server rather than a file server.

I have some experience with smaller databases (< 1GB) and am now finishing my dream-machine database server: a dual A7M266-D, 2 * XP1700+, 1GB of Samsung PC2700 and a Compaq 3200 array SCSI RAID controller with 6 Fujitsu MAJs (10,000rpm).

I have read the book Inside MS SQL Server 7, which I can recommend when you want to start on large databases (better to use SQL Server 2000). From reading it, and from my own experience: even when a database fits into the machine's main memory, it will still make reads and writes to disk. With a 1TB database being roughly 1,000 times larger than main memory, it will have to read/write almost everything from and to disk. (Tip: buy as much memory as you can; you can stick up to 3.5GB in an A7M266-D.)

In a large database the transaction table usually occupies the most disk space. This transaction table should be normalised, meaning it will contain foreign keys in the form of integers, perhaps some date fields and a few amount fields, making a record as small as 100 bytes! If you want to fill a 1TB database with these records, you end up with 10,000,000,000 of them. If, for example, you perform a select on this table for all records of product 321, the database will first access the product index of the transaction table on disk to find the position of the first record for product 321, then reposition the disk heads to that position in the transaction table file to read the record, then go back to the index file to look up the position of the next record for product 321, then back to the transaction table file to read that record, etcetera, etcetera.
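A one-line sanity check of that row-count figure (the 100-byte record size is the illustrative value from the paragraph above):

```python
# Back-of-the-envelope row count for a 1 TB transaction table (illustrative figures only).
DB_SIZE_BYTES = 10**12   # 1 TB, decimal
ROW_BYTES = 100          # the ~100-byte normalised record assumed above

print(f"~{DB_SIZE_BYTES // ROW_BYTES:,} rows")   # ~10,000,000,000 rows, i.e. ten billion
```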

So unlike a file server, which reads and writes huge continuous streams of data in the form of large files, a database server makes small read/write operations (records) randomly across the disk. The number of these small operations per second (I/Os per second) that the PC or server can sustain largely determines database performance. That is why hard disks for database servers aren't judged by their sequential MB/sec but by their I/Os per second; see the IOMeter benchmark tests.

Throwing a couple of large IDE drives into software RAID or onto cheap (Promise/RocketRAID) IDE RAID controllers gives you a storage solution that scales in size and in sequential throughput (MB/sec), but not in I/Os per second; read SR's reviews of the Promise 66/100/133 controllers. The 3ware Escalade controllers do scale in I/Os per second if you use IBM GXPs: of all the IDE drives, IBM optimised their firmware best for high I/O rates. But the reliability of these drives doesn't make them a first choice among SR members. Other drives, like the 8MB-cache WDs, perform well in single-drive desktop benchmarks but not in I/Os per second in large IDE RAID arrays.

Even IDE's best combo, IBM GXP plus 3ware, is no match for SCSI. IDE drives manage around 150 I/Os per second in the IOMeter database benchmark (threads=256). I got 270 I/Os per second with one SCSI Fujitsu MAJ drive, the new 15,000rpm SCSI drives do almost 400, and I got 1000 I/Os per second with 3 drives on the Compaq 3200 array controller: something the 3ware 6800 couldn't achieve even with 8 drives (see http://www.3ware.com/products/pdf/Benchmark.pdf). Having 6 or 8 SCSI drives on a good SCSI RAID controller will probably get you 2000 I/Os per second, compared to a lousy 150 from a Promise or software IDE RAID setup. The SCSI protocol and SCSI RAID controllers are so geared towards heavy-load server performance that they are on a completely different level from the current (best-performing) IDE solutions, which are geared towards desktop usage (maybe this will change with Serial ATA). There is no way IDE can deliver the I/Os per second you need for a terabyte database!
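To put those I/O-per-second figures in perspective, here is a rough sketch of how long a purely random workload would take at each rate; all the IOPS numbers are the ones quoted above, not independent measurements, and the one-million-read workload is just an arbitrary yardstick:

```python
# Rough duration of a purely random workload at the I/O rates quoted above.
# One million random reads is just an arbitrary yardstick, not a real benchmark.
RANDOM_READS = 1_000_000

rates = [
    ("single IDE drive",                      150),
    ("single 10k rpm SCSI (Fujitsu MAJ)",     270),
    ("3 SCSI drives on the Compaq 3200",     1000),
    ("6-8 SCSI drives on a good controller", 2000),
]
for label, iops in rates:
    print(f"{label:38s}: ~{RANDOM_READS / iops / 3600:4.1f} hours")
```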

Just take a look at how machines built for large databases are put together: www.tpc.org (open the PDF disclosure documents, which show on the pricing detail page the parts of the server that ran the TPC database benchmark).

This one, http://www.microsoft.com/sql/techinfo/BI/t...erabytecube.asp, shows a 1TB database test by Microsoft and Unisys, although I suspect an ES7000 is beyond your £3000 budget. The storage solution (drives + RAID controller) makes up the majority of the budget of these machines. A 150 I/Os-per-second IDE solution will bottleneck the processing of a 1TB database.

My advice:

Go for 2GB or more of memory (memory is cheap now; 512MB of registered ECC is around $150). A database uses memory as cache; more cache means less disk access, which vastly increases database speed. For example, if the indexes can be held in main memory, that saves the I/O to the index files.

Go for a big SCSI storage array that is I/O-optimised: don't use RAID 5! If you have friends in the US, buy at US online stores: SCSI drives and controllers are much cheaper there. You can buy SCSI RAID controllers in the US from $250; my 3200 cost $400. 10,000rpm SCSI drives can occasionally be picked up, in clear-out lots from retailers or on eBay, for $100, making a 10-drive SCSI RAID array $1200-$1500. That leaves you with £2000, about $2800, for the rest of your system.

If you buy the other parts (motherboard, processors, memory, case + PSU, CD, CD-RW, the rest) in the US as well, you will need around $1700, leaving you with £3000 * 1.4 $/£ = $4200, minus $1500, minus $1700 = $1000, which I would spend on upgrading to 3.5GB of memory and adding more disks, or on 15,000rpm drives instead of the 10,000rpm ones.

Let me know how your 1TB database testing goes. As you can see from the MS/Unisys link, a 32-processor ES7000 with a $1-2 million EMC storage solution took a couple of hours to process a 1TB database! So running 1TB on a home-made budget PC server is like hunting elephants with pepper spray (no offence meant). Email me at Oudkerk@wishmail.net


Thanks very much for the detailed reply - actually a friend of mine wrote the Inside SQL Server 2000 book - Kalen Delaney :) I get a mention in the preface and she used me as test data :)

I appreciate all the problems I am going to face, and performance will be a problem. The thing is, I am on a tight budget and am not too worried about how well it performs - I will know that when it is implemented at a customer, things can only get better.

Have you read Jim Gray's articles on cheap, under-$10,000 terabyte systems on the Microsoft Research site? It's really interesting to see the performance differences between SCSI and IDE.

Thanks again

Trevor


Probably the biggest challenge you are going to face is how to get your 8 IDE drives within 18 inches of your controller. I suspect a cube-style server case is going to be your best option.

You correctly identified your second challenge - your power supply. SCSI drives offer a "delayed start" option to avoid all drives starting to spin at the same time. A typical 7200rpm drive will draw 1.75 amps off the 12-volt rail when starting, then drop down to about half an amp to run. Start all 8 drives at the same time and you will need a power supply that can put out 14 amps at 12 volts right as the system is starting. I do not know whether the A7M266-D derives Vcore from +12V or from +5V, but if it is using +12V, you can figure on another 10 amps of 12 volts for the processors. None of your basic off-the-shelf PC power supplies are going to give you 24 amps of +12V. All too many people have the "you buy a power supply by the watts" mentality, which is just plain wrong. Total watts is only one of the 7 factors you must consider when selecting a power supply.
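A rough sketch of that +12 V rail budget, using the figures from the paragraph above; the 10 A for the processors only applies if the board really does derive Vcore from +12 V, which is an assumption:

```python
# Rough +12 V rail budget at power-on, using the per-drive figures from the post above.
SPINUP_AMPS_PER_DRIVE = 1.75   # typical 7200 rpm IDE drive while spinning up
RUNNING_AMPS_PER_DRIVE = 0.5   # the same drive once it is up to speed
NUM_DRIVES = 8
CPU_AMPS_12V = 10              # only if the board derives Vcore from +12 V (an assumption)

startup = NUM_DRIVES * SPINUP_AMPS_PER_DRIVE + CPU_AMPS_12V
running = NUM_DRIVES * RUNNING_AMPS_PER_DRIVE + CPU_AMPS_12V
print(f"+12 V needed at power-on  : ~{startup:.0f} A")   # ~24 A with no staggered spin-up
print(f"+12 V needed once running : ~{running:.0f} A")   # ~14 A
```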

I have not seen any benchmarks of the A7M266-D's PCI throughput. You might want to find some before you settle on a system board.

As for database vs. file server vs. ... I can't help you there.

Probably the biggest challenge you are going to face is how to get your 8 IDE drives within 18 inches of your controller. I suspect a cube-style server case is going to be your best option.

This isn't always the "case", pardon the pun.

I recently replaced my InWin Q500 with a CK-1100 from www.caseoutlet.com. My primary concerns were airflow and drive space; I use hot-swap bays for my hard drives. With four hard drives and two CD-RWs, the InWin - which is a really great full-tower case - only has 5 external 5.25" drive bays.

The point is, cable routing is a much bigger deal than I anticipated in a cube case. There is only one cable-routing port in this case, way up at the top. The PSU sits near the bottom on the right-hand side, and the ATX connector from my 400W Antec wouldn't reach. I spliced in about a foot of cable when the case first arrived as an "emergency measure", and purchased a True550 today, which has a cable just long enough.

My 5+1 SCSI cable wouldn't reach either, and I had to buy one with an extra connector that I can't use just to reach all the drives on the other side of the case. Why they put so much space between the drive connectors and so little (almost the same amount) between the first drive connector and the connector for the controller is beyond me.. LVD can go 15m, get with it guys. ;)

You correctly identified your second challenge - your power supply. SCSI drives offer a "delayed start" option to avoid all drives starting to spin at the same time. A typical 7200rpm drive will draw 1.75 amps off the 12-volt rail when starting, then drop down to about half an amp to run. Start all 8 drives at the same time and you will need a power supply that can put out 14 amps at 12 volts right as the system is starting. I do not know whether the A7M266-D derives Vcore from +12V or from +5V, but if it is using +12V, you can figure on another 10 amps of 12 volts for the processors. None of your basic off-the-shelf PC power supplies are going to give you 24 amps of +12V. All too many people have the "you buy a power supply by the watts" mentality, which is just plain wrong. Total watts is only one of the 7 factors you must consider when selecting a power supply.

The True550 I mentioned above I would consider a "basic off-the-shelf" supply -- I bought mine today at CompUSA; it doesn't get any more "off the shelf" than that, and they don't even carry 68-pin LVD cables.

It provides exactly 24A on the +12, 40A on the +5, and 32A on the +3.3. Ran about $140.

I think 24A on the +12 is fine anyway; other system components are not at max draw during bootup. Things like the graphics card, processors, etc. are mostly idle until the OS boots.

Doh.. how much is £3000 anyways? I'd thought that was US dollars on first glance.

£1 = $1.45, so 1.45 * 3000 = $4350.


My two cents

I would also favor the 3ware 7850 card in any large RAID - that's if you have a 66MHz PCI bus. 8O


tdwyer

Well, if you know one of the writers of Inside SQL Server 7, you are in good company! Good to know you are using SQL Server. I still have to read the last chapters of the book, although I have already upgraded to the 2000 version. I'm skipping the "Inside SQL Server 2000" edition but will probably buy the "Inside SQL Server Yukon" edition as soon as it comes out - probably summer 2003. Don't tell me you are going to test your 1TB database on an alpha/beta release of Yukon 8) ?

In one of the first chapters of the book, hardware is discussed. Having a system with high I/Os per second is strongly recommended, and using IDE is strongly discouraged. The book clearly states that the wrong disk subsystem will bottleneck your whole system. The paging concept in SQL Server also reveals a lot about performance related to memory and disk I/O. In the database files chapter you can read how to divide your database over different files: multiple .mdf data files, the .ldf log file and the tempdb database files, all of which you can place on different disk arrays to enhance performance.

I think I have read the MS Research report, but I believe they compared 7200rpm IDE to 7200rpm SCSI, which is an obsolete comparison because 15,000rpm SCSI drives are available now. Could you post a direct link to the document?

Regarding which RAID level to choose: RAID 5 is very bad for random write speed. Just for testing, fault tolerance isn't that critical, so RAID 0 will give you the most bang for the buck; RAID 1/0 is ideal (performance plus fault tolerance) but twice as expensive.
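The usual reasoning behind "RAID 5 is very bad for random writes" is the small-write penalty: each small random write costs about four physical I/Os on RAID 5 (read data, read parity, write data, write parity) versus two on RAID 1/0. A rough sketch, using the ~150 IOPS-per-IDE-drive figure quoted earlier in the thread:

```python
# Classic small-write penalty arithmetic behind the RAID level advice above.
# Penalty factors are the textbook values, not measurements of any specific card.
PER_DRIVE_IOPS = 150   # the rough per-IDE-drive figure used earlier in the thread
DRIVES = 8

write_penalty = {"RAID 0": 1, "RAID 1/0": 2, "RAID 5": 4}   # physical I/Os per small random write

for level, penalty in write_penalty.items():
    effective = DRIVES * PER_DRIVE_IOPS / penalty
    print(f"{level:8s}: ~{effective:4.0f} random-write IOPS from {DRIVES} drives")
```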

Enermax PSUs are known for a weak 5V line. Antec's new TruePower 550 probably has more stable rails. You can also use multiple PSUs: one for the motherboard and the rest, and a separate one for the drives. You can find people in this forum and on 2cpu.com who are running two-PSU machines. I used an OEM TruePower 550 with a Chenming full server tower, which can hold roughly 10 to 12 drives.

Are you going to build a 1TB transactional database, simulating as many transactions as possible, or are you going to test a 1TB DSS database?


You have read the IDE RAID documents - this is actually what got me started. I visited Jim Gray's lab at BARC and he demonstrated his personal '1TB desktop PC' - this was some time ago though.

I will be testing pure DSS - basically loading as large a fact table for the data warehouse as possible and building cubes from it - that's why I am not too bothered about pure performance, as I appreciate that whatever I use is going to be slow. I just need to get the approach correct; then I can scale the ideas up to the correct configurations - e.g. the ES7000.

Inside SQL Server 2000 does have some good additions - I thought it was worthwhile, but then I ended up buying two copies :)

An interesting aside - have you seen the Rosetta and/or SkyServer projects running on SQL Server? Rosetta will be up to 100TB in two years' time!

Best Regards

Trevor


Trevor

Could you give me the link to the IDE RAID article?

Nice to know you are in the data warehouse area; this is what I use SQL Server for, and I like it a lot. On top of that, DSS databases need fewer I/Os from the storage solution than OLTP databases, and more processor power.

But if your relational DB is 1TB, you need additional space to store the cube. Analysis Services cubes are usually smaller on disk than the relational DB, but you need room for temporary files too: tempdb, the Analysis Services scratch file and the NT swap file. So if your relational DB is indeed 1TB, you probably need 1.5TB-2TB of hard disk space! How large are your .mdf and .ldf files exactly?

Read the Word document at this link for the hardware storage setup of the SQL files for a large OLAP cube: http://www.microsoft.com/sql/techinfo/BI/C...ngOLAPsites.asp

This link shows some trade-offs between the OLAP storage modes: http://msdn.microsoft.com/library/default..../olapunisys.asp In a Unisys test it appeared that you can save a lot of processing time and disk space by setting the aggregation level to 30% instead of 100%, while end-user query times hardly suffer from an only partially aggregated cube.

Despite the MS IDE article, I still have doubts about whether an IDE RAID array will hold up under high load for what is, for IDE, an extremely long continuous run (37 hours or more).

At www.pricewatch.com you can now get a 73GB Hitachi 10,000rpm SCSI drive for $329. You would need 14 of these. Estimated total system price if you buy from the lowest-priced supplier on pricewatch.com:

14 SCSI drives, 14 * $329 = $4,606
1 RAID controller = $500
2 SCSI cables, 2 * $100 = $200
1 A7M266-D = $200
2 Athlon XP 1900+ = $220
2 CPU coolers, 2 * $50 = $100
4 512MB reg. ECC modules, 4 * $150 = $600
1 Antec 550W PSU = $100
1 Chenming server tower = $100
1 CD rewriter = $70
1 CD player, floppy drive, keyboard, mouse = $100
=========
Total: $6,796 ≈ £4,700

I will be testing pure DSS - basically loading as large a fact table for the data warehouse as possible and building cubes from it - that's why I am not too bothered about pure performance, as I appreciate that whatever I use is going to be slow. I just need to get the approach correct; then I can scale the ideas up to the correct configurations - e.g. the ES7000.

I am in the last stage of tweaking a Microsoft Analysis Services solution. My experience has been along the lines of what DataBase Freak has been saying in this thread. Your performance will be determined by:

1) I/O subsystem (disks, RAID config, PCI bus(es)).

2) Memory subsystem (size, latency and bandwidth).

3) Software edition (Enterprise, Standard, Advanced Server, etc.)

4) CPUs (cache size, megahurtz, etc.)

in that order. Maybe 3) can play a big role if your solution fits into certain sizing bands.

Your cube design, build and some level of aggregations for a 10GB fact table running on an AMD-based solution with IDE RAID will take several hours to days, depending on the number of dimensions and the number of members in each of those dimensions. If you have any distinct-count measures in your cube, I shudder to think what it will take to run.

If what you say is true (re: testing DSS design), then I would use a machine with modest disk capacity and a test dataset that limits the size of the fact table and the number of members in each dimension. For example, try to create a uniformly distributed subset using one year from a dataset of 10 years - that should reduce your fact table to 10% of its size for testing, and the time dimension would have one member for year instead of 10.
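A minimal sketch of that subsetting idea, assuming the fact table has been exported to a flat CSV file with a date column; the file and column names here are invented purely for illustration:

```python
# Minimal sketch of the subsetting idea: keep only one year of a ten-year fact
# table extract. File and column names ("order_date") are invented for illustration.
import csv

KEEP_YEAR = "1999"   # one year out of ten -> roughly 10% of the fact rows

with open("fact_table_full.csv", newline="") as src, \
     open("fact_table_subset.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["order_date"].startswith(KEEP_YEAR):   # assumes ISO-style yyyy-mm-dd dates
            writer.writerow(row)
```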

I am of the opinion that using the brute force approach of throwing hardware at a design testing problem is a misguided effort.

Also, for a problem of your size, please keep in mind that Microsoft Analysis Services can address up to 3 GB of physical memory - this will affect your final real-world deployment. There are OLAP servers that are 64-bit capable and can address far more physical memory. Alternatively, you can use partitioned cubes across a farm of enterprise-class servers - whatever that is today.


The articles can be found on the following site - http://research.microsoft.com/~Gray/

Interestingly, Jim has recently posted an article on portable terabricks :)

As for processing cubes using AMD and IDE - I currently have no problem processing 10-million-row fact tables with my somewhat smaller system.

This discussion is very much appreciated (seriously!) but I do know what I am facing - I will be using a mixture of real data and then, to really scale testing up, automatically generated data (from the DataGen tool on Jim's site).

Cheers

Trevor


Trevor:

Well, I think you have already made up your mind about IDE :( . I would then recommend 3ware; the 6800s are cheap, which will enable you to buy 2 to 4 controllers and a couple more drives. I would go for the IBM 120GXP because they scale best in terms of I/Os in a RAID array.

Just put your source database .mdf and .ldf files, tempdb, the OLAP database, the OLAP scratch files and the NT swap file on different arrays (ideally on different controllers; at $150 for a 6800 this shouldn't be a problem). Placing them all on one big IDE array will be a nightmare!

Regarding the PSU, I think the Antec TruePower is fine: http://www.envynews.com/review.php?ID=97 - you will probably need two of them.

Regarding dual AMD: with two XP2100+ (1.73GHz) you have pretty much the raw power of an 8-way 500MHz Xeon system, which companies were buying as a high-end database server only two years ago. The AMD processors are the best part of your system! I would, though, add 3.5GB of memory, because memory is dirt cheap now and can take away a bit of the pain of the weak disk subsystem (IDE) by caching data.


Any particular reason you're shooting for 1TB?

If that's something you're really set on, then the 3Ware 7850 is the obvious choice.

If you're still open to suggestion and can sacrifice some capacity, there are some other alternatives possibly worth considering, such as whether you really want RAID 5 or RAID 1+0. RAID 1+0 would offer better performance. RAID 5 would maximize your price/capacity ratio.

The Adaptec 2400a controller offers excellent performance in RAID 5, and can be equipped with a maximum of 128 MB of cache for about $60. Using the 2400a and 4 1200JB drives in RAID 5 would yield appx. 360 GB of storage.

The 3Ware 7850 can handle up to 8 drives, which would yield appx 480 GB of storage if you used 8 drives in a RAID 1+0 setup. Obviously, this would mean buying 8 drives as opposed to 4, so you'd have to pay twice as much for storage. Of course, you could also decide to use the 7850, 8 drives, and RAID 5, for appx 840 GB of storage.

Just some thoughts. I hope you find this helpful.


The Adaptec 2400 is a 32-bit, 33MHz PCI card.

The 3ware 7850 is a 64-bit, 33MHz PCI card.

The Adaptec should top out at just over 100MB/s.

I have measured the 3ware at 172MB/s.

These are sequential rates, but I would bet the 3ware would still win hands down in RAID 5.
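For reference, those measured rates sit under the theoretical PCI ceilings, which follow directly from bus width times clock (a rough sketch; real-world throughput lands well below these peaks):

```python
# Theoretical PCI ceilings behind the measured figures above (real-world throughput
# lands well below these peaks).
def pci_peak_mb_per_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz   # bytes per transfer x million transfers per second

print(f"32-bit / 33 MHz PCI: ~{pci_peak_mb_per_s(32, 33.33):.0f} MB/s peak")   # ~133 MB/s
print(f"64-bit / 33 MHz PCI: ~{pci_peak_mb_per_s(64, 33.33):.0f} MB/s peak")   # ~267 MB/s
```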

