It's kinda odd. I have an almost completely different setup, but my transfer graphs from my RAID controller (Elite 1600) and my drive (Atlas 15K II) look nearly identical to yours. My setup is a single Celeron 2.66GHz CPU on an Asus P5RD1-V board with everything onboard and a single card in the system on the PCI bus (LSI MegaRAID Elite 1600). That has only a three-connector cable hooked up: one end to the card, one to the single drive configured as RAID 0, and an active terminator on the final connector at the end of the bus. No matter what I have done, I cannot get a higher throughput than the same 37MB/sec or so that you show on your graphs, and mine look the same. I'm interested in a solution too if you find one. Also, the drive hits 98MB/sec on my home system, but the controller, cable, and terminator are all different there.
Aero, while your points about PCI bus speed are valid, that doesn't account for why he is only seeing 1/3 to 1/4 of the possible speed of the bus with a set of drives that should easily hit 100MB/sec on it. (I have a RAID 0 of MAS 36's on a 21320-IS at home that can hit 96MB/sec sustained over the entire usable space on a maxed-out PCI bus where every slot on the board is filled with various things, from two different network cards to a secondary video card.) Also, his controller may be old and have a slower processor, but how much processing does RAID 0 really require of the card's onboard CPU? Yes, I have seen adapter problems cause issues like this, but in my case I have a 68-pin drive plugged into a 68-pin cable and still see an identical result. Also, the poster says that 5 out of the 6 drives use these converters; if they were the problem, that would explain it for those drives, but not for all of them.
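Just to put numbers on the "1/3 to 1/4 of the bus" point, here's a quick back-of-the-envelope sketch. It assumes a plain 32-bit/33MHz PCI slot (the standard desktop kind); real sustained PCI throughput runs below this raw ceiling because of arbitration and other overhead, so the real gap is a bit smaller, but nowhere near small enough to explain 37MB/sec:

```python
# Sanity check: what fraction of a standard PCI bus is 37 MB/sec?
# Assumes 32-bit width at 33 MHz -- the common desktop PCI slot.

PCI_CLOCK_HZ = 33_000_000   # 33 MHz bus clock
PCI_WIDTH_BYTES = 4         # 32-bit wide bus

# Raw theoretical ceiling in MB/sec (decimal megabytes)
theoretical_mb_s = PCI_CLOCK_HZ * PCI_WIDTH_BYTES / 1_000_000

observed_mb_s = 37.0        # the throughput both of us are stuck at
fraction = observed_mb_s / theoretical_mb_s

print(f"PCI theoretical ceiling: {theoretical_mb_s:.0f} MB/sec")
print(f"37 MB/sec is {fraction:.0%} of that ceiling")
```

So the observed plateau sits around 28% of the raw bus ceiling, squarely in that 1/4-to-1/3 range, which is why a bus-saturation explanation doesn't hold up.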