IronFire

Large Decrease In Performance With Better Adapter?

I have just bought a second-hand Adaptec 2100S SCSI RAID card to replace my Tekram DC-390U2B/W, as I wanted to run my main boot disk in RAID 1 for security, but it seems to have dramatically reduced performance. Is this possible?

My system specs are:

Athlon 2500+

1GB of DDR RAM

XP Pro

Epox 8RDA3+ nForce2 motherboard

2x Seagate Cheetah ST318432LW

When I was running them as separate disks they performed as expected in benchmarks and in general use: approximately 60MB/s max read, 40MB/s min read and 50MB/s average read using HDTach 2.70.

Now, with them running in a RAID 1 array on the new 2100S with a fresh XP install, they are getting approximately 32MB/s max read, 23MB/s min read and 30MB/s average read. The system as a whole also feels slightly less snappy, with Windows taking marginally longer to do tasks.

Is there any reason why changing to this seemingly better controller could cause nearly a halving in performance, or am I being blinded by untrue benchmarks?

Any help appreciated.

I have just bought a second-hand Adaptec 2100S SCSI RAID card to replace my Tekram DC-390U2B/W, as I wanted to run my main boot disk in RAID 1 for security, but it seems to have dramatically reduced performance. Is this possible?

...

Is there any reason why changing to this seemingly better controller could cause nearly a halving in performance, or am I being blinded by untrue benchmarks?

Yes.

You are under the belief that something that costs more $$$ must give better performance. This is sometimes true, but unfortunately not in the world of Adaptec SCSI RAID controllers. This forum is (or was) full of stories of people getting horrible performance from the Adaptec 2100 and others of its ilk.

I wasn't under the impression that something that costs more should perform better; I was under the impression that, because it had a much greater feature set and was a lot newer, it would perform at least as well as my existing card, not necessarily better. But it doesn't seem to be doing so.

Would I be best off reverting to my Tekram card and flogging the Adaptec?

Sell the Adaptec and get an LSI, Mylex or AMI controller. Really, anything other than Adaptec would be better. Perhaps try software RAID 1? You would need a Server edition of Windows for it, though. Or just Ghost your running drive to your spare drive once a week or whatever, whilst using your current Tekram/LSI.

Is there any reason why changing to this seemingly better controller could cause nearly a halving in performance, or am I being blinded by untrue benchmarks?

I agree generally with Pradeep here, though I do have very solid Adaptec RAIDs with good performance. However, I think it is worthwhile to point out that you should not assume RAID = better performance. RAID processing requires some overhead, and you are only going to get better performance - offsetting that overhead - if the adapter compensates for the overhead and the disk configuration is intrinsically superior to the non-RAID comparison. Look carefully at RAID performance ads - usually they are over 10 spindles, often maxed out at 16 or 32 spindles. Two drives is barely a RAID. Your desire for redundancy is good, but it is doubtful that you will get an overall noticeable performance boost, though you might be able to measure a difference in benchmarks.

Most RAID controllers are optimized for conservative data protection with many, many spindles. It is very common for me to hear from clients that they are disappointed with RAID performance. Of course, they have invariably bought three 74GB drives for their RAID instead of nine 18GB drives.

Right now you'd be better off returning the RAID controller and using the $ to install a Raptor for your OS drive. Make the SCSI drives data drives.

Unfortunately, ghosting my drive every week relies on the human factor, which tends to forget backups :/

No, you can just schedule backups - they'll happen without you.
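
For what it's worth, a minimal sketch of that idea in Python - the paths are hypothetical and the schedule is up to you; the point is that a job registered with Windows Task Scheduler takes the human factor out of it entirely:

```python
# Minimal unattended mirror job (illustration only; paths are hypothetical).
# Register it with Windows Task Scheduler to run weekly so nobody has to
# remember to kick it off.
import filecmp
import shutil
from pathlib import Path

SOURCE = Path("C:/Data")       # hypothetical: the data worth protecting
DEST = Path("D:/Backup/Data")  # hypothetical: the spare drive

def mirror(src: Path, dst: Path) -> None:
    """Recursively copy files that are new or changed since the last run."""
    dst.mkdir(parents=True, exist_ok=True)
    for item in src.iterdir():
        target = dst / item.name
        if item.is_dir():
            mirror(item, target)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=True):
            shutil.copy2(item, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    mirror(SOURCE, DEST)
```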

Surely, as all the controller has to do is write identical copies of the data to each disk, it isn't having to do much work - you would think it could do that easily. RAID 5 and the other modes it also supports, well, that's another matter.

Unfortunately, as I bought the card second hand, the option of returning it doesn't exist, so getting it to work would be great.

Overall, my complaint is not about failing to get better or even similar performance, but about getting half of what I had before.

If I do go for a Raptor, what SATA card should I go for? Before this I would have bought an Adaptec, but now I'm not so sure.

I personally like the Promise TX2+, TX4 or SX4, depending on what you want to do. Top performance, but they don't support SMART monitoring, FYI.

I am not up on HDTach and RAID specifically, so I cannot comment on the accuracy of your results. In general, the best you would expect from RAID 1 is about 1:1 or a bit worse for writes, and a fairly decent increase for reads, but that again will depend on the RAID, what it does with the data, the configuration, etc. That is theory, and mostly just benchmark numbers - in practice I would not expect much, especially in terms of anything noticeable. What you might get in terms of read performance - splitting the reads between the two drives - is different from striping, but effectively often still similar to STR increases, which usually do not do much for many tasks. See here for what I mean.
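
For a rough sense of what to expect, here is a toy model of a two-disk mirror; the load-balancing assumption and the efficiency factor are mine, not anything measured on the 2100S:

```python
# Toy RAID 1 throughput model (the assumptions are mine, not the 2100S's).
# Writes hit both disks (~1x a single drive); reads can be spread across
# the pair, with a fudge factor for how well the controller balances them.
def raid1_throughput(single_drive_mbps: float, read_fraction: float,
                     balance_efficiency: float = 0.8) -> float:
    read_speed = single_drive_mbps * (1 + balance_efficiency)  # up to ~2x
    write_speed = single_drive_mbps                            # mirrored writes
    # Blend by time-per-MB so the read/write mix is weighted correctly.
    time_per_mb = (read_fraction / read_speed
                   + (1 - read_fraction) / write_speed)
    return 1 / time_per_mb

# A pair of ~60MB/s Cheetahs should land between 60 and 120MB/s on pure
# reads, so 32MB/s measured here points at the controller, not at RAID 1.
print(raid1_throughput(60, read_fraction=1.0))  # ~108 MB/s, reads only
print(raid1_throughput(60, read_fraction=0.5))  # ~77 MB/s, mixed workload
```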

Between the specific adapter being not much loved (at least by performance hounds) and the questionable benchmark, it is difficult to tease out exactly what your problem is, if there is one. Have you tried anything real-world to see the effect? If you can't tell the difference with a stopwatch on something you do frequently, it is probably not worth worrying about.

I would ignore the benchmarks if the system felt as fast as it used to, but it doesn't; Windows takes longer to load things like the Control Panel and to do other tasks than it did before, and that was compared with quite an old install.

The adapter has the standard 32MB Adaptec memory stick in it.

To illustrate the poor performance of Adaptec RAID adapters, I will post the preliminary results of the SCSI RAID comparison I am currently working on. The benchmarks are the weighted average of six different workstation benchmarks based on Business Winstone (Office, WinZip, Norton AV), Content Creation (Photoshop, Premiere, Director, Dreamweaver etc.), software installation (installing Office, WordPerfect, Photoshop and some smaller applications) and DVD ripping (stripping a DVD in IfoEdit, with the original data and the stripped DVD on the same disk). The testing methodology is comparable to the methods used by Storage Review (both based on WinTrace / RankDisk trace & playback), only I use different workloads.

[Graph: preliminary weighted workstation benchmark results]
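
To make the aggregation concrete, this is the shape of that weighted average - the workload scores and weights below are hypothetical placeholders, not FemmeT's actual figures:

```python
# Shape of the weighted-average aggregation described above. The scores
# and weights are made up for illustration; the real six workloads and
# their weights are not published in this post.
scores = {                   # per-workload results for one adapter (made up)
    "business_winstone": 41.0,
    "content_creation": 38.5,
    "software_install": 29.0,
    "dvd_rip": 55.2,
}
weights = {                  # hypothetical weights, summing to 1.0
    "business_winstone": 0.3,
    "content_creation": 0.3,
    "software_install": 0.2,
    "dvd_rip": 0.2,
}
weighted_average = sum(scores[k] * weights[k] for k in scores)
print(f"weighted average: {weighted_average:.1f}")
```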

Intelligent caching algorithms can make quite a big difference on intelligent RAID adapters equipped with cache memory. This is where the Adaptec adapters really fall short and the MegaRAID adapters excel. Compared to the 320-2X (based on the new IOP321 XScale I/O processor, with DDR memory and PCI-X support), the 2200S (IOP303) is also far behind in the areas of cache memory bandwidth and sequential transfer rates.

The 2100S and 2200S have many similarities in terms of RAID implementation (Adaptec SCSI controller, Intel i960 / IOP303 I/O processor and SDRAM memory). The 2100S uses older/slower SCSI controllers and I/O processors, has only one channel and (AFAIK) does not support 66MHz PCI. The 2100S is legendary for its poor performance. I actually used one back in the days when the original Cheetah X15s were hot. With three drives I couldn't get it to perform better than a single X15.

Does anyone know why Adaptec is so popular while their performance is on a completely different level (negatively) from the competition?

Does anyone know why Adaptec is so popular while their performance is on a completely different level (negatively) from the competition?

Why are you using desktop benchmarks for RAID? IOMeter is more representative of what RAID is intended for. And do I understand that you are at just 4 spindles? This is not at all accurate or representative of the intended RAID market.

We own three Mylex AcceleRAIDs and one Adaptec 3400. I bought the Adaptec last, in spite of the fact that the Mylexes tended to benchmark a little bit better, because of real-world practical considerations:

1. First and foremost, Mylex was owned by IBM at that time. Tech support was useless, there was no advance replacement, and in fact when I spoke with the tech he assured me that he had never even laid hands on one of these boards. Now, I know the history of Mylex, and it was not always this way, and it probably is not now, but that was bad news at the time. One of the AcceleRAIDs has never left the box - I bought it to make sure that I could keep the other two running.

2. The popularity of Adaptec SCSI chipsets, including onboard on motherboards, and the near-universal availability of interoperable products.

3. The Adaptec RAID in question has, if I have my dates correct, been running essentially unattended for about 3.5 years with 9 Atlas 10Ks per channel, performing within spec and flawlessly serving its purpose.

RAID is first and foremost optimized for servers, and the people who buy RAIDs, RAID adapters and other such hardware are first and foremost looking to ensure that the stuff works. Paying the mortgage depends on it.

Now that is a common-sense, practical reality call, btb4. Rare in RAID threads, I usually find.

Do well.

Jonathan Guilbault.

Why are you using desktop benchmarks for RAID? IOMeter is more representative of what RAID is intended for. And do I understand that you are at just 4 spindles? This is not at all accurate or representative of the intended RAID market.

IronFire is using his 2100S in a desktop system, so I posted the workstation results.

I have been testing server performance as well (the server averages are about halfway down this page: http://www.tweakers.net/plan/233). The server benchmarks are a combination of the IOMeter fileserver benchmark and traces of disk access on MySQL and Apache servers. IOMeter is too synthetic to be representative of real-world server performance. It is nice for measuring the influence of access times and command queuing performance, but it almost completely ignores the factor of caching (because IOMeter's access patterns are too random). Caching is just as important for server usage as for desktop usage.
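
To illustrate that caching point, here is a small, self-contained simulation (my sketch, not FemmeT's trace tooling): a uniformly random pattern like IOMeter's barely touches a modest cache, while a skewed, popularity-driven pattern hits it most of the time:

```python
# Why uniformly random I/O (IOMeter-style) defeats a cache while skewed,
# "popular blocks" access does not. Illustration only.
import random
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    cache = OrderedDict()  # an OrderedDict doubles as a simple LRU cache
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark as recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used
    return hits / len(accesses)

random.seed(0)
blocks, cache_size, n = 100_000, 1_000, 50_000
uniform = [random.randrange(blocks) for _ in range(n)]
zipf_weights = [1 / (rank + 1) for rank in range(blocks)]  # skewed popularity
skewed = random.choices(range(blocks), weights=zipf_weights, k=n)

print(f"uniform (IOMeter-like) hit rate: {hit_rate(uniform, cache_size):.1%}")   # ~1%
print(f"skewed (real-world-like) hit rate: {hit_rate(skewed, cache_size):.1%}")  # far higher
```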

Four drives will represent workstation usage pretty well; it is a bit limited for server usage. Still, many dual-processor servers are running small RAID 5 arrays with three or four drives, a RAID 1 boot array and a hot spare. There are also practical issues: RAID testing is very time-consuming - 3 to 5 days per adapter, 15 adapters, two months of benchmarking.

Btw: the test configuration is an MSI K8D Master dual Opteron mainboard with 100MHz PCI-X and one Opteron 140. The drives are 18.4GB Maxtor Atlas 15Ks.

Thanks again for everyone's informative posts. So it looks like my best option is to flog both drives and the card, and buy a 74GB Raptor and a DVD writer for backup?

FemmeT, first of all I'd like to thank you for all the benchmarks you have taken the time to perform and for using useful ones. There is too much ATTO and HDTach and Sandra around the internet.

I would say that it might be better if you didn't weight them and instead included the individual test results. It would make your benchmarks a known quantity and significantly more reproducible. This is especially important because you have combined benchmarks with significantly different access patterns into this weighted score (for example, the DVD rip and Business Winstone are totally unrelated). I think unrelated tasks should be presented as such. The results would be significantly more useful. It would also save you a little time not having to do the math.

I for one would be tremendously interested in seeing the results for just Content Creation and Business Winstone individually.

FemmeT,

Caching is just as important for server usage as for desktop usage.

In a very different way, I would argue. With a web server and a database server you try to cache things you know will be accessed (like front-page articles) in system memory. This sort of caching is not influenced by the controller at all. The accesses to disk on a well-configured server will be random, and to rarely accessed elements like old mailing list archives or to huge databases that can't be kept in RAM. So in a practical sense I think you underestimate IOMeter's usefulness as a server-performance measuring tool. Random I/O is where you need the power in multiuser situations most often.

Do well.

Jonathan Guilbault.

There are also practical issues: RAID testing is very time-consuming - 3 to 5 days per adapter, 15 adapters, two months of benchmarking.

Btw: the test configuration is an MSI K8D Master dual Opteron mainboard with 100MHz PCI-X and one Opteron 140. The drives are 18.4GB Maxtor Atlas 15Ks.

Well, I stand by my concerns about the applicability of your testing, especially since in the server/business market - the primary market for RAID - "performance" encompasses far more than just benchmark numbers. Still, it appears to be an ambitious project that could help expand the body of available performance data. Good luck!

I would say that it might be better if you didn't weight them and instead included the individual test results. It would make your benchmarks a known quantity and significantly more reproducible.

I know. The final review will include results of the individual tests and will also include ATTO and WinBench benchmarks. The review will appear on Tweakers.net (a Dutch hardware tech site) and will be translated into English. At this moment I am still busy benchmarking the ICP Vortex GDT8524RZ+. I hope to include the Adaptec 29320, 39320 and 2120S too. It is quite difficult to get samples of Adaptec controllers because Adaptec does not supply evaluation units to the press (LSI Logic and ICP Vortex, on the other hand, are very supportive).

Maybe I can post graphs of the office and workstation results later today (it is 2:00 AM here).

In a very different way I would argue. With a web server and a database server you try and cache things you know will be accessed (like front-page articles) in system memory. This sort of caching is not influenced by the controller at all. The accesses to disk on a well-configured server will be random and to rarely accessed elements like old mailing list archives or to huge databases that can't be kept in RAM.

Many servers do lots of disk writes, for example access logging on a webserver or web database server. (Write-back) cache will always help a lot in situations with high levels of concurrent reads and writes. Often the writes are very local to each other, e.g. the system is writing to a small set of log files or is writing data to certain tables that are frequently accessed.

The webserver benchmark I created with WinTrace is based on an Apache server that serves 2 to 150KB images from a 6GB data set of 60,000 images (located across a 36GB RAID drive). The access and error logs were located on the same drive. The recording of the trace started after the filesystem cache (1.6GB on a system with 2GB RAM) was filled. The idea is to simulate a webserver with heavy disk I/O. I did not mean to simulate the average webserver, which mainly serves dynamic content plus static content from a small data set, and therefore has little disk I/O.

The statistics of the trace show that the average seek distance was four to nine times higher than in the workstation traces, so the access pattern was very random. The proportion of reads and writes was approximately 25/75 in terms of transfer size and 50/50 in terms of I/O operations. The results show that performance is heavily influenced by the adapter's write-back cache. Write-back cache doubles performance on many adapters. In the IOMeter fileserver simulation the difference between write-back and write-through is smaller: 10-20 percent on an access pattern with 20 percent write I/O.
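
As a quick sanity check on that mix (plain arithmetic on the quoted percentages; the totals below are made up, and the ratio does not depend on them), a 25/75 split by bytes combined with a 50/50 split by operations means the average write is three times the size of the average read:

```python
# 25/75 read/write by bytes but 50/50 by operations implies the average
# write is 3x the size of the average read. Totals here are hypothetical;
# the final ratio is independent of them.
total_bytes = 1_000_000_000  # hypothetical trace volume
total_ops = 100_000          # hypothetical operation count

avg_read = (0.25 * total_bytes) / (0.50 * total_ops)   # bytes per read
avg_write = (0.75 * total_bytes) / (0.50 * total_ops)  # bytes per write
print(avg_write / avg_read)  # 3.0
```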

The question is whether such a workload can represent overall server usage. I think the idea of highly random read I/O combined with more localized write I/O is valid for many server applications.

There is also a database benchmark included.
