qzm

Member
  • Content Count: 12
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About qzm
  • Rank: Member
  1. Interesting, I have nearly the same setup, except that I am using RAID 6 with 8 320GB 7200.10 drives, and I am getting around 140-160MB/sec with different benchmarks under Fedora Core 5 (a sketch of the kind of sequential-read measurement behind figures like these follows after this list). My only gripe with the card at present is that the ONLY way to get drive-fault outputs is via the SMBus connection, as it has no individual drive-fault LEDs, though it does have activity LED outputs, which are IMHO MUCH less important. My other gripe is that the onboard audible fault indication does nothing when I pull a drive from the swap cages; since I have yet to have a real drive fault, and you cannot force-fail a drive in the software, it is hard to tell whether it will ever actually work. Operationally, however, I cannot complain about the card: it works well. Basically it seems operationally good, but not QUITE set up for real RAID server use.
  2. Latest system benchmarks

    Really? Perhaps you should stop wasting your time and go look at some SPEC results instead. If you are actually using Sandra MFLOPS for this, then you seriously misunderstand floating-point performance, and if you are also doing heavy CFD etc. then you have a problem. I would suggest learning a little more before spending money. SPECfp is not perfect, but it is several orders of magnitude better than Sandra, as it uses real-world scientific apps with real-world compilers in known configurations. Best of all, you can access all the specific test results, choose the code that is closest to your own codes, and look at those figures.
  3. Could I possibly ask you: does the EX8350 manual give any information on those pins? From my understanding it uses a serial bus (SMBus) to send commands to change LED status, rather than a set of pins, one for each LED. If it is the latter, it is normally easy to set up, but if it is a serial bus, it needs a correct intelligent controller on the far end (a sketch of that difference follows after this list). Thank you for your help.
  4. Greetings all. I am considering using a Promise EX8350 in a 3U RAID enclosure, and am wondering if it supports status LED output, and in what form. If someone more familiar with the card could let me know, it would be great. The case I am considering is the Chenbro RM312, which seems nice for the price: http://www.chenbro.com/corporatesite/produ...es.php?serno=33 It claims 'Promise compatible', but I trust that about as far as I can throw it. The main 'feature' I require is that the card will correctly identify a failed drive to allow hotswap. I am looking at a 5+1 RAID 6 setup (1 hot spare), which will leave plenty of space in the enclosure, but hey, that should help cooling. Any help/opinions would be greatly appreciated. Regards, Stuart.
  5. Can you fit $400 more memory in the system? (Main memory, that is, not controller cache.) That may make an even larger difference, depending on the load profiles. RAID controller memory is MAINLY useful for decoupling writes, so if your database is not under a high write load it will make next to no difference; however, if you have a high write load (and battery backup on the controller) it can make a big difference. For high read loads I have always found system memory to be more beneficial (a rough model of this trade-off follows after this list).
  6. I find it a little unfortunate that no testing is done on more 'typical' machines, primarily for the office-use type measurements. I think everyone would be very surprised to find someone running office apps on a machine like this. A much more typical 'fast' office machine would be at best perhaps a 955X motherboard running the standard ATA/SATA interfaces, and since PIIX controllers drive a majority of all the world's IDE drives, it would have been nice to see figures on them. The degree of change in some of these drives is, to say the least, interesting, perhaps indicating that the particular choice of hardware is having a notable effect on the outcomes, and therefore that some thought should be put into what type of hardware is used to test which benchmark.
  7. Actually, it depends: for 100% of game players you would be right. For those of us who use such cards for video rendering acceleration, and a couple of other non-typical uses, this would be a very nice option!
  8. New SSD disk from Gigabyte

    Unfortunately this is not really true. Windows has absolutely terrible metadata caching (actually it was not too bad in some situations under Win2k due to a bug, which they fixed in XP), so what you said above is only true if files are not being created/moved/renamed/deleted, and as long as their size does not grow; in any of these situations Windows enforces a full blocking metadata flush (a small timing sketch of this contrast follows after this list). The I/O handling in Windows is much like that of VMS, and about as modern. Just about any *nix variant gives you much better control of such things, and can perform MUCH MUCH faster. Of course, if you have continuous virus scanning enabled under Windows, it gets much, much worse. I laugh very loudly every time I find someone with an expensive RAID 0 setup and their virus scanner scanning all written files. You completely fail to understand the subtle needs of high disk performance in your comments above; Photoshop's scratch file (which it specifically avoids growing when it can) is not even slightly similar to decompressing files, or to most database activity.
  9. Linux 32/64-Bit Benchmarks

    The compiler concerned (gcc) is currently quite well tuned for AMD's x86-64 implementation, and has no support for Intel's. The secret is in the instruction timings and interdependencies, which are quite different between the implementations. As soon as Intel's instruction timings are added (and supported through an architecture switch, as is normal; see the compile-flag sketch after this list), the Intel numbers will look a lot better. This is all pretty much normal for new implementations, and it shows the benchmarker's lack of knowledge that they did not mention it.
  10. Yes, but my point is that it is rather underhanded to reduce the warranty period of your own hardware when shipping it in one of your own higher-end systems, especially when this increases the price of the parts significantly anyway. In other words: buy the server with 9 drives for $x, and you get a 3-year drive warranty; buy the server with NO drives, then add the same drives yourself, for less than $x, and you get a 5-year drive warranty. You save money and increase the warranty period. It's just insulting. And you have to look through the very, very fine print to find this out; it even took IBM about 2 days to come to this conclusion themselves. IBM also shipped, in these same servers, as an alternative, equivalent Seagate drives, which DO keep their 5-year warranty in this situation (or so Seagate told me). It was not a selectable option; we just happened to end up with the IBM drives.
  11. I had a situation just a short while ago where *5* IBM drives from an IBM server (containing 9 drives) failed within a couple of months. Now, they are all 15kRPM 18GB drives, and all models which have carried a 5-year warranty since they were first produced. IBM (well, Hitachi now) refuses to honour the warranty because they were shipped in an OEM system (even though it is an IBM system), since they are covered by the system warranty, which is only 3 years and ran out months before half the drives in the system failed. They claim that this system warranty invalidates the drive warranty. Got to love that support. This was NOT a cheap system, either. The bottom line: buy an IBM server, then buy retail drives from somewhere else to populate it, and you get a warranty; pay MORE for the drives direct from IBM, and you get what they seem to think you deserve.
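Below are a few hedged Python sketches referenced in the posts above. First, for post 1: a minimal way to take a raw sequential-read figure of the 140-160MB/sec kind. The device path and read sizes are assumptions for illustration; point it at your own array, run as root, and drop the page cache first or repeat runs will be inflated.

```python
# Hypothetical sequential-read check (post 1). The device path is an
# assumption -- substitute your own RAID block device. Drop the page
# cache first (echo 3 > /proc/sys/vm/drop_caches) or rerun numbers
# will be inflated by cached data.
import os
import time

DEVICE = "/dev/sda"            # hypothetical: your array's block device
BLOCK = 1024 * 1024            # 1 MiB per read
TOTAL = 512 * BLOCK            # read 512 MiB in total

fd = os.open(DEVICE, os.O_RDONLY)
start = time.time()
done = 0
while done < TOTAL:
    chunk = os.read(fd, BLOCK)
    if not chunk:              # hit end of device
        break
    done += len(chunk)
elapsed = time.time() - start
os.close(fd)

print("%.1f MB/sec" % (done / elapsed / 1e6))
```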
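For post 3: the difference between one-pin-per-LED outputs and an SMBus-driven backplane, sketched with the smbus2 Python package. The bus number, device address, register, and bit layout are all hypothetical; the EX8350's real enclosure-management protocol is exactly what the post is asking about. The point is that a serial bus carries register writes that some intelligent controller on the far end must interpret, where a plain pin would just be wired to an LED.

```python
# Illustration of post 3's distinction: per-drive fault LEDs driven over
# SMBus need a controller that understands register writes, unlike
# simple one-pin-per-LED outputs. Address, register, and bit layout
# below are hypothetical.
from smbus2 import SMBus

ENCLOSURE_ADDR = 0x3C   # hypothetical SMBus address of the backplane MCU
LED_REGISTER = 0x01     # hypothetical register selecting the fault LEDs

def set_fault_led(bus_no, drive_index, on):
    """Set one drive's fault LED via a read-modify-write on SMBus."""
    with SMBus(bus_no) as bus:
        state = bus.read_byte_data(ENCLOSURE_ADDR, LED_REGISTER)
        if on:
            state |= (1 << drive_index)
        else:
            state &= ~(1 << drive_index)
        bus.write_byte_data(ENCLOSURE_ADDR, LED_REGISTER, state)

# e.g. light the fault LED for drive 2 on SMBus bus 0:
# set_fault_led(0, 2, True)
```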
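For post 5: a back-of-envelope model of why controller cache mostly helps writes (battery-backed write-back absorbs them) while extra system memory mostly helps reads (higher hit rate). Every number here is an illustrative assumption, not a measurement.

```python
# Toy latency model for post 5's trade-off. All figures are
# illustrative assumptions.
disk_latency_ms = 8.0      # average access time of one spindle
cache_hit_ms = 0.05        # latency when served from RAM

def effective_latency(read_fraction, read_hit_rate, write_absorbed):
    """Weighted average latency (ms) for a mixed read/write load."""
    read = read_fraction * (
        read_hit_rate * cache_hit_ms + (1 - read_hit_rate) * disk_latency_ms)
    write = (1 - read_fraction) * (
        write_absorbed * cache_hit_ms + (1 - write_absorbed) * disk_latency_ms)
    return read + write

# Read-heavy load: raising the read hit rate (more system RAM) dominates.
print(effective_latency(0.9, read_hit_rate=0.80, write_absorbed=0.0))
print(effective_latency(0.9, read_hit_rate=0.95, write_absorbed=0.0))
# Write-heavy load: write-back controller cache absorbing writes dominates.
print(effective_latency(0.3, read_hit_rate=0.80, write_absorbed=0.0))
print(effective_latency(0.3, read_hit_rate=0.80, write_absorbed=0.9))
```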
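For post 8: a small timing sketch contrasting metadata-heavy operations (create/rename/delete) with rewriting data inside an existing file whose size never changes. Absolute numbers vary wildly by OS and filesystem; the post's claim is that Windows penalizes the first pattern far more heavily than a *nix would.

```python
# Timing sketch for post 8: metadata churn vs. in-place data rewrites.
import os
import shutil
import tempfile
import time

N = 2000
workdir = tempfile.mkdtemp()

# Metadata-heavy: create, rename, delete N small files.
t0 = time.time()
for i in range(N):
    path = os.path.join(workdir, "f%d" % i)
    with open(path, "wb") as f:
        f.write(b"x" * 4096)
    os.rename(path, path + ".moved")
    os.remove(path + ".moved")
meta = time.time() - t0

# Data-only: overwrite the same region of one preallocated file,
# so no metadata changes and the file never grows.
big = os.path.join(workdir, "big")
with open(big, "wb") as f:
    f.write(b"\0" * 4096)
t0 = time.time()
with open(big, "r+b") as f:
    for i in range(N):
        f.seek(0)
        f.write(b"y" * 4096)
data = time.time() - t0

shutil.rmtree(workdir)
print("metadata-heavy: %.3fs  data-only: %.3fs" % (meta, data))
```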
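For post 9: what an "architecture switch" means in practice for gcc, sketched as a Python driver that builds the same toy floating-point loop with generic scheduling and with the period-appropriate -march=k8 (AMD x86-64) and -march=nocona (Intel EM64T) targets. The file names and toy kernel are illustrative; timing the resulting binaries on each vendor's chip is how you would see the scheduling effect the post describes.

```python
# Post 9's point: the same code scheduled for different CPU
# implementations via gcc's architecture switch.
import subprocess

SOURCE = """
#include <stdio.h>
int main(void) {
    double s = 0.0;
    for (long i = 1; i < 50000000L; i++) s += 1.0 / (double)i;
    printf("%f\\n", s);
    return 0;
}
"""

with open("loop.c", "w") as f:
    f.write(SOURCE)

targets = {
    "generic": ["-O2"],                  # no architecture switch
    "k8":      ["-O2", "-march=k8"],     # scheduled for AMD's implementation
    "nocona":  ["-O2", "-march=nocona"], # scheduled for Intel's EM64T
}
for name, flags in targets.items():
    subprocess.check_call(["gcc"] + flags + ["loop.c", "-o", "loop_" + name])
    print("built loop_%s with %s" % (name, " ".join(flags)))
```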