Quindor

Everything posted by Quindor

  1. Quindor

    Sata Raid, Tcq, Etc.

    Hmm, reading the frontpage and other pages on the web today, I see that my questions were just about a month too early. Seagate disks (which I suspected, since they have native SATA solutions rather than a bridge chip) and a Promise controller sporting the Marvell chip with the SATA II 150 implementation. A perfect combination. Quindor
  2. Quindor

    Sata Raid, Tcq, Etc.

    Thank you all for your replies. I'm guessing the QuinFTP 2.0 project is going to be put in the closet for a little while; working on my new car and its audio setup is getting a bit expensive. I also want to make sure the new version of the server has proper NCQ in there for performance reasons, and the mobo still has a bottleneck, etc. All just too bad. But I did find some nice things, such as the Supermicro hot-swappable SATA HDD bay and their 8-port controller card, which is cheap but very good; there is even a Microsoft white paper backing it up (a read speed of 2.4 GB/s using 32 drives and a software stripe). All in all very interesting indeed! I guess 1.1 TB will just have to do for now. Again, thank you all for the great discussions. Quindor
  3. Quindor

    Sata Raid, Tcq, Etc.

    Hmm, that's a good one, and I guess spending about E60 extra to get the Hyper-Threading and the cache should be justifiable... I don't want to screw up this machine with a crappy proc. More confusion today: I've been mailing with Supermicro about the board and reading the Intel white paper, and they seem to contradict each other, so I'm a bit confused now. :S Maybe it has a 16-bit Intel link or something? At least the chipset details I checked on the website didn't reveal this. Anyway, still looking for answers on the command queuing question. I've been looking at Silicon Image controller information and white papers; they officially support it, which gave me more knowledge, but still no clue which disks do and don't. I've sent mail to them, Highpoint, Maxtor, Seagate and I believe WD as well to ask... let's see what they mail back. I'll keep you all posted. Quindor
  4. Quindor

    Sata Raid, Tcq, Etc.

    Ah, the server is for serving FTP at LAN parties. Earlier you pointed me towards IIS because of its low CPU utilization, since half of it runs in the kernel itself. On average a client will expect around 5-8 MB/sec, and the number of clients varies. With the old setup I often just hit the max of the disks, the PCI bus and the CPU... that's why I want to upgrade. Right now, 10 MB/sec is roughly 15% CPU usage, and 40 MB/sec is about 60%. Not too bad, but that's for FTP; I use Direct Connect and such too, and if anything uses a lot of CPU it's that. Other FTP servers (with more features) also use more than IIS. I think I'll go with the Celeron, because I don't think the processors would differ much for this kind of task. I also think the bandwidth will be OK, since the disks rather than the buses will be the bottleneck. Although, in an ideal situation of 8 clients accessing the 8 disks separately with large files, I would do 80 MB/sec. That'd be cool! The buses internally should be able to handle that too, even when using a PCI-based fiber card for the network (a rough back-of-the-envelope check is sketched below). Should be nice! Anyone know if I am forgetting or overlooking something? Quindor
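    To sanity-check those figures, here is a minimal back-of-the-envelope sketch in Python. The per-client rates and the roughly linear CPU scaling are taken from the numbers in the post above; the client counts are just example loads, not measurements.

```python
# Rough capacity estimate based on the figures above: each client pulls
# roughly 5-8 MB/s, and measured CPU usage scaled about linearly
# (~15% at 10 MB/s, ~60% at 40 MB/s, i.e. ~1.5% CPU per MB/s served).

CPU_PCT_PER_MBS = 1.5        # assumed from the measurements quoted above
PER_CLIENT_MBS = (5, 8)      # expected per-client transfer range

def estimate(clients: int) -> None:
    for per_client in PER_CLIENT_MBS:
        total = clients * per_client
        cpu = total * CPU_PCT_PER_MBS
        # anything over ~100% CPU means the processor, not the disks,
        # would become the bottleneck at that load
        print(f"{clients:2d} clients x {per_client} MB/s = "
              f"{total:3d} MB/s aggregate, ~{cpu:.0f}% CPU")

for n in (4, 8, 16, 32):
    estimate(n)
```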
  5. Quindor

    Sata Raid, Tcq, Etc.

    Does anyone know whether, for building an FTP server, a Celeron 2.8 will differ much from a Northwood P4 or a Prescott P4? The Celeron has the least cache, then the Northwood, and then the Prescott with 1 MB... My reasoning is that the cache shouldn't matter much, because disk data access isn't cacheable in that way, so performance should be about the same with all of them... or not? Quindor
  6. Quindor

    Sata Raid, Tcq, Etc.

    A little reply to myself (and anyone who is reading). I just browsed through the Intel documentation of the hub architecture and specifically this chipset... and indeed, as I quote from the manual:

    8-Bit Hub Interface
    — 266 Mbyte/s maximum throughput
    — Parallel termination scheme for longer trace lengths
    — Supports lower voltages as per Hub Interface 1.5 spec

    Which means it is indeed only 266 MByte/sec. Now, I'll count off 16 MB/sec for protocol traffic, the PCI ATI Rage video card and such. That leaves me with 250 MB/sec of bandwidth. In theory that should be enough to fill a whole gigabit card connected through the PCI-X interface (I mainly use fiber, not copper). To my knowledge the path would be: PCI-X disk controller, southbridge --> northbridge over the link (carrying the data requested by the Gbit card, let's say 100 MB/sec), then into memory and the processor to do whatever it wishes, then back from northbridge --> southbridge (again 100 MB/sec) to the fiber NIC. That would still leave me with 50 MB/sec to spare, if I am correct? I will admit I do not have much knowledge of the Intel hub architecture... is this correct?

    Seeing as this will be an FTP server using SATA disks, I do not think I'll EVER come near 100 MB/sec; right now my server hovers around 20-30 MB/sec mostly, and 50 MB/sec in testing, saturating the PCI bus. But going by the above, that should not even be possible in a situation where a network connection is the data source or target? This sure sucks though. I hope people can discuss my findings, or maybe somebody knows whether I am correct or not. Quindor

    P.S. It looks like the onboard Gbit CSA interface is linked directly into the northbridge (MCH), so it doesn't need to go back over the hub interface? I guess that would be preferable for maximum performance.
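    To make that budget explicit, a quick sketch in Python. The 266 MB/s figure is the one quoted from the Intel manual; the 16 MB/s overhead and the 100 MB/s network rate are the assumptions made in the post, not chipset specifications.

```python
# Bandwidth budget over the 8-bit Hub Interface between the bridges.
# Disk data coming from the PCI-X controller crosses the link once on the
# way in; if the NIC also sits behind the southbridge, it crosses once more
# on the way back out.

HUB_LINK_MBS = 266     # maximum throughput quoted in the Intel documentation
OVERHEAD_MBS = 16      # assumed: protocol traffic, the PCI ATI Rage card, etc.
NET_RATE_MBS = 100     # assumed sustained rate toward the gigabit NIC

usable = HUB_LINK_MBS - OVERHEAD_MBS            # ~250 MB/s
crossings = 2                                   # disks in, fiber NIC out
headroom = usable - crossings * NET_RATE_MBS    # ~50 MB/s spare

print(f"usable link bandwidth : {usable} MB/s")
print(f"used at {NET_RATE_MBS} MB/s       : {crossings * NET_RATE_MBS} MB/s")
print(f"headroom              : {headroom} MB/s")

# With the CSA-attached copper NIC hanging off the northbridge, the outbound
# hop never touches the hub link, so only one crossing remains and the
# headroom roughly doubles.
```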
  7. Quindor

    Sata Raid, Tcq, Etc.

    Hey, thanks for the replies. Hmm, I downloaded the manual for the motherboard from Supermicro, and it shows the drawing with the hub in place, but not the bandwidth constraints on it. Since it is using the Hance Rapids southbridge, I could imagine the bandwidth being larger, but I'll have to look that up on the Intel site.

    Ah, I did much, much testing when building the FTP server. I had Promise RAID controllers, Highpoint RAID controllers, different BIOS versions of the Highpoint (RocketRAID 404) and Windows 2003 striping. The conclusion was that Windows 2003 striping of 2 disks yielded the best results in a simulated load environment (4 clients reading from the test disk/array). With 2 disks in RAID 0 I saw performance gains, but with more than that it was the same or even less (more I/O is necessary, which seems a logical explanation to me). I think that maybe with the caches of the disks working together in RAID 0, the pair can handle the requests for a file better than one disk can... or at least that was my conclusion. But I certainly tested it VERY well, because I wanted to build an optimally performing system. I understand the principle of RAID 0: if you read from all disks at the same time you can add up the speeds sequentially, but when you approach random reads, all the heads still need to seek to the file at the same time, so in the worst case the speed theoretically drops to that of one disk, or even lower because of the increased bus and I/O traffic handling. Anyway, as said, after testing, the conclusion was 4 RAID 0 arrays of 2 disks each on the same controller (I tested cross-controller arrays and even cross-brand arrays, since I have 8x 120 GB, 4 WD and 4 Maxtor). This was also more convenient for drive letters and disk space sizes, although that can be fixed with NTFS mount points too, which I think I will use on the new server.

    I've had limited time today, sadly, but I'm confused now between Command Queuing, Tagged Command Queuing and Native Command Queuing. I understand that on SATA the queue depth is 32 commands, and I know the Marvell controller supports this... the motherboard manual even specifically states that it does, confirming what I already thought. The thing is, the Hitachi/IBM drives use the 88i8030 Serial ATA bridge chip, so they are not native SATA. And then of course Seagate, being the ONLY native SATA drive, wrote this http://www.seagate.com/cda/newsinfo/newsro...824%5E2,00.html in which they mention "NCQ is a feature that can only be implemented on native Serial ATA hard drives like Seagate's". Now, I don't mind buying Seagate drives (although with Hitachi I might be able to make a deal), but they only go up to 200 GB and I want 250 GB minimum (I want to reach a 2 TB array with 8 disks max). This whole NCQ/TCQ/CQ business sure is confusing, and there doesn't seem to be one straight answer... I'm 99% sure the controller can do it now, especially after reading the Supermicro docs, which literally state "Up to 32 outstanding commands". Anyway, still hoping to discuss this with people who have some insight. Quindor
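    To illustrate the random-read point, here is a small sketch of how a RAID 0 set maps requests to its member disks. The 64 KB stripe unit is just an assumed example value, and the offsets are random placeholders.

```python
# Which member disk serves a given byte offset in a RAID 0 (striped) set.
# One client streaming a big file touches the disks in strict rotation and
# gets the "add up the speeds" behaviour; several clients reading different
# files hit the disks at scattered offsets, so every request still pays a
# seek on whichever disk it lands on.
import random

STRIPE_SIZE = 64 * 1024      # assumed stripe unit, in bytes
NUM_DISKS = 2                # disks per array, as in the 2-disk sets above

def member_disk(offset: int, stripe: int = STRIPE_SIZE, disks: int = NUM_DISKS) -> int:
    """Return the index of the disk that holds this byte offset."""
    return (offset // stripe) % disks

# Sequential access: neat rotation over the members.
print([member_disk(i * STRIPE_SIZE) for i in range(8)])   # [0, 1, 0, 1, ...]

# Simulated concurrent clients: arbitrary offsets across a ~120 GB span.
offsets = [random.randrange(0, 120 * 10**9) for _ in range(8)]
print([member_disk(o) for o in offsets])
```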
  8. Quindor

    Sata Raid, Tcq, Etc.

    Ok, something went wrong with posting that and I don't seem to be able to edit it? :S Admins, please remove the post above.

    ---

    Hey, I'm looking to build a new FTP server for myself based on SATA tech. Right now I'm using 4 WD drives and 4 Maxtor drives (I used 4 of each brand to test durability; a WD died about a week ago, not even a year old :S). All drives are 120 GB, on Highpoint controllers (RocketRAID 404) and Promise controllers (2x Ultra100 TX2). Anyway, I'm looking to upgrade the whole situation. I've found a nice new motherboard from Supermicro which uses the normal 875P chipset (I have it in my workstation, nice and fast!) but with a different southbridge, which adds PCI-X to the equation. Sadly only 66 MHz, but still, 512 MB/sec of bandwidth should be plenty.

    I use the FTP server for LAN party FTP serving, and it's been doing fine; the above setup yields about 20-40 MB/sec transfer, and the peak has been 1 TB of traffic within 12 hours. I use 4 RAID 0 arrays of 2 disks each for best performance. Now, I think I have most of the hardware sorted out, BUT the disks... and I was hoping to find some information. I'm looking to start off with this: a Supermicro P4SCT+, a Supermicro DAC-SATA-MV8 and a Supermicro CSE-M35T-1. The board is the 875 with the Hance Rapids southbridge, giving me 3x PCI-X at 64-bit/66 MHz, which should be plenty of bandwidth. It also houses 2x Serial ATA from Intel and 4x Serial ATA using a Marvell chip (88SX5040); the Marvell chip is hooked up to the PCI-X bus internally, while the Intel ports use a link inside the bridge itself, so they cost NO PCI bandwidth, sweet. The DAC-SATA-MV8 is a PCI-X add-on card which uses the 88SX5080 chip, giving it 8 ports.

    Now here's the deal. These chips are capable of Tagged Command Queuing, which is in the SATA 1.0/ATA-6 protocol. I'm going to be serving large and small files to sometimes 4 to 32 clients at a time over gigabit (either CSA-based copper (NO PCI bandwidth) or fiber). The thing is, with so many clients asking for so much data from the disks at the same time, it almost turns into random reading of the data, so normal sequential disk transfer isn't really valid anymore.

    OK, my only real question, because I have the above pretty much figured out, is this: which of the disks available right now accept and support Serial ATA command queuing and actually use it WELL? Simple question, eh? You'd think you would be able to find an answer... well, no, you won't. I think that in building this new FTP system, command queuing is going to do a WHOLE lot to improve my transfer rates with such traffic patterns. Sadly there are no reviews, no technical data, nothing really to fall back on. Striping I do using Windows 2003's dynamic disk striping, so it doesn't matter that the controller is dumb on that part; it supports the ATA queuing in the chip. Controllers which use this chip (for instance the RocketRAID 1820(A) from Highpoint) also support this feature. They actually gave me a vague answer to this question, saying that IBM/Hitachi and WD support it (WD only with new special firmware), but not which models, etc. I'm working with them to get an answer, but I was hoping I wasn't the only one on the planet with the same thoughts and ideas. And no, the intelligent cache on the RocketRAID 1820A is of no use for RAID 0; it's purely there to speed up RAID 5... to my disappointment.

    Anyway, I'm hoping to start up some sort of discussion to find out which drives do or don't have it, and which disk would perform best for my situation. Hopefully this interests more people. Quindor

    Just some info for people looking to buy the RocketRAID 1820(A). Highpoint told me: "Between RR1820 and RR1820A, there are some differences as below. 1. RR1820A is low-profile, designed for rackmount solutions, but RR1820 is only for standard PC systems. 2. The HPT601 chipset on the RR1820A supports advanced cache algorithms to optimize the XOR parity under RAID 5. 3. The new driver for the RR1820A offers higher stability with minimized tolerance. The caching of the HPT601 is to optimize the performance of data reading and writing under RAID 5, so it won't help raise the performance of RAID 0 (software striping)." And they also said: "About command queuing, right now only the HDs from WD, IBM and Hitachi support TCQ. If your HD is from WD, please be sure to contact WD to see if the firmware of the HD is of the new version. We're deeply sorry that we couldn't list the HD models that support TCQ for you; you can get that info from those manufacturers' websites."
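    Since the whole question is whether command queuing pays off under this "many clients means random reads" pattern, here is a toy model of why a drive that can reorder a queue of outstanding commands wins. The seek-cost model and the LBA count are simplifying assumptions, not measurements of any particular drive.

```python
# Toy model: compare total head travel when a drive services requests in
# arrival (FIFO) order versus reordering them by position, the way a drive
# with TCQ/NCQ can do with up to 32 outstanding commands.
import random

QUEUE_DEPTH = 32             # SATA command queuing allows 32 outstanding commands
DISK_LBAS = 250_000_000      # assumed: rough LBA count of a ~120 GB disk

def head_travel(requests, start=0):
    """Total LBA distance the head sweeps servicing requests in the given order."""
    pos, travel = start, 0
    for lba in requests:
        travel += abs(lba - pos)
        pos = lba
    return travel

random.seed(1)
queue = [random.randrange(DISK_LBAS) for _ in range(QUEUE_DEPTH)]

fifo = head_travel(queue)               # no queuing: take requests as they arrive
reordered = head_travel(sorted(queue))  # queued drive sweeps across the platter once

print(f"FIFO order     : {fifo:>13,} LBAs of head travel")
print(f"Reordered (CQ) : {reordered:>13,} LBAs of head travel")
print(f"Reduction      : {1 - reordered / fifo:.0%}")
```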
  9. Ok, my setup: Asus A7V266-E (RAID + onboard sound), 1.8 GHz Athlon XP, 256 MB Crucial DDR RAM, GeForce2 GTS 32 MB, etc. I'm running Windows XP Professional. I recently bought two D740X-6L 40 GB Maxtor disks, both from the same supplier, and they seem to be the exact same model as far as I can see, which is good of course.

    I've experimented with RAID 0 and striping before. In my LAN party FTP server I also have 3 Maxtors (DiamondMax Plus 40 and Plus 60, 40 GB each) combined with the software striping of Windows 2000 Server, and it does a nice 90 MB/sec, dropping to 70 MB/sec at the end of the disks. I use three different IDE controllers with each disk as a single master on its own primary channel, which is what accomplishes this. Anyway, that works great, it isn't the problem; hooking the same disks up to a Promise FastTrak 100 gave horrible results, 40 to 45 MB/sec at best. But this isn't about my server, that one works fine. It's about my desktop.

    I bought the 2 D740X disks to stripe into an 80 GB array on the Promise. First try: both disks as masters on separate channels. That turned out to be a big no-no (at the time I had my operating system on ANOTHER 60 GB D740X on the motherboard controller, so no problems, test mode!). It hung and shook, and anything I tried to write to it would make the system stop responding. I tried the .14, .18 and .24 drivers; the .14 seemed to work best, but there were still intermittent hangs. Quite bad indeed. Then I tried both disks on the same channel. Not the best setup you can imagine, but wonder above wonder, it actually worked: no more hiccups. I ran HD Tach, the transfer rate was QUITE low, and ATTO confirmed this. As probably anyone can confirm, this configuration is undesirable unless there's no other option, but nonetheless it worked and didn't hang anything; the .24 drivers seemed fastest there. Next I tried one disk as primary master and the other as secondary slave. This worked, but now it would sometimes hang intermittently. Weird, but it worked better than two masters for some reason.

    So, what the heck, I jumpered both disks as slaves and put them on separate channels. WTF, success? No more hangs, access time hovers around 11 or 12 ms (acoustic management still on), and ATTO brings back values of around 60 MB/sec reads, where a single disk only does around 40. Not a huge improvement, but better nonetheless. Sandra (puke) also rates it higher than a single disk, and the WinMark STR graph hovers between 55 and 70. I know this is probably the highest I'll ever reach because VIA botched the PCI bus on this chipset, and that's with all the patches and latest drivers applied, sadly. It probably won't go higher. But what is weird is that the Maxtors work perfectly fine as two slaves? It's fine by me, but weird indeed. This is using the .24 drivers, by the way. The BIOS version of the controller I don't know; it comes with the 1007 BIOS of the motherboard and supports the new 48-bit LBA addressing.

    I was just wondering if we could start a nice discussion topic on this and finally figure out what DOES and DOESN'T work with Maxtor disks and Promise controllers. I feel the disks could deliver more if my PCI bus weren't as broken as it is; maybe someone with an Intel board can confirm this? My FTP server does 90 MB/sec with ease, and it has an 820 or 810 chipset, I don't remember (a PII-400 with 512 MB SDRAM, the only Intel board I had that takes a single 512 MB stick). Anyway, enough about my FTP server; it only proves that when correctly configured, Maxtor disks can sustain a high STR.

    Anyway, can anyone confirm or test my findings and see if the two-slaves thing really matters THAT much? I'd love to hear more stories. Kind regards, Quindor
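    For anyone who wants to sanity-check their own array the same way, here is a crude sustained-read sketch. The file path and block size are placeholders; HD Tach and ATTO obviously do this far more thoroughly, and against the raw device rather than a file.

```python
# Stream a large existing file in fixed-size blocks and report MB/s.
# Use a file much larger than RAM, otherwise the OS cache inflates the result.
import time

TEST_FILE = r"D:\testdata\bigfile.bin"   # placeholder: any multi-GB file on the array
BLOCK = 1024 * 1024                      # 1 MB reads

def sequential_read_mbs(path: str, block: int = BLOCK) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

print(f"{sequential_read_mbs(TEST_FILE):.1f} MB/s sustained read")
```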
  10. Quindor

    D740x + Promise FastTrak 100 Lite

    I've also applied the George E. Breese PCI Latency Patch. That raised my bursts from 50-60 MB/sec to a solid 80 MB/sec! Anyway, look at the benchmarks. I also switched to the newest Promise .24 beta driver for the FastTrak 100 Lite, together with the newest 1007 BIOS for the Asus A7V266-E, which now supports 48-bit addressing. Anyway, the pictures! Hope you like them; please post some replies. I've seen many, many people with problems with the same setups. Quindor
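    A rough model of why a latency tweak can lift burst numbers like the 50-60 to 80 MB/sec jump above: the PCI latency timer sets how many clock cycles a bus master may keep the bus per grant, so longer tenures amortize the fixed arbitration and address overhead. A small sketch, where the overhead cycle count is purely an assumption for illustration; the real figure depends on the chipset's arbitration behaviour.

```python
# Approximate ceiling on PCI burst throughput as a function of how many
# data clocks a master is allowed per bus grant.

PCI_CLOCK_MHZ = 33
BUS_BYTES_PER_CLOCK = 4      # 32-bit PCI transfers 4 bytes per data phase
OVERHEAD_CLOCKS = 8          # assumed: arbitration + address/turnaround per burst

def burst_ceiling_mbs(data_clocks_per_grant: int) -> float:
    efficiency = data_clocks_per_grant / (data_clocks_per_grant + OVERHEAD_CLOCKS)
    return PCI_CLOCK_MHZ * BUS_BYTES_PER_CLOCK * efficiency

for clocks in (16, 32, 64, 128, 248):
    print(f"{clocks:3d} data clocks per grant -> ~{burst_ceiling_mbs(clocks):.0f} MB/s ceiling")
```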