Several things are probably causing your "strange" results. In my eyes, they are actually expected.
First, for testing network bandwidth, use something like netio or iperf/jperf. Those test the network itself, not the other devices in the chain.
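If you cannot install one of those tools, the idea behind them can be sketched in a few lines of Python: push raw bytes over a TCP socket and time it, so no disk or file-sharing protocol gets in the way. This is a minimal loopback demo (function name, chunk size, and total volume are my own choices, not from any tool); for a real network test you would run the sink on the server and point the sender at its LAN address.

```python
# Minimal raw-TCP throughput check, in the spirit of netio/iperf.
# Loopback only here; replace "127.0.0.1" with a remote host for a real test.
import socket
import threading
import time

CHUNK = 64 * 1024          # 64 KiB per send

def measure_loopback_mbps(total=32 * 1024 * 1024):
    """Send `total` bytes through a local TCP connection, return MB/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))     # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Accept one connection and drain it until the sender closes.
        conn, _ = srv.accept()
        while conn.recv(CHUNK):
            pass
        conn.close()

    t = threading.Thread(target=sink, daemon=True)
    t.start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    payload = b"\0" * CHUNK
    sent = 0
    start = time.perf_counter()
    while sent < total:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return sent / elapsed / 1e6

if __name__ == "__main__":
    print(f"raw TCP throughput: {measure_loopback_mbps():.0f} MB/s")
```

On loopback this mostly measures your TCP stack, which is the point: it gives you a ceiling that no file-transfer protocol on the same link can exceed.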
For testing transfer rate, CIFS is not the right way to go. FTP has fewer limitations.
If you are testing a real-world application over SMB1 (XP/2003), about 65 MB/sec is the maximum you will achieve with a single stream. Even with 8x SSD in RAID 0 this will not change; it is a protocol limitation. With Windows 2008/Windows Vista it should be a bit better, say 80 MB/sec.
Once you move to Windows 2008 R2 x64 and Windows 7 clients, you will see 100 MB/sec+ transfer rates over SMB2. There is no way to achieve this with 2003/XP-level software; the protocol is simply too old and too inefficient.
A single 1 TB disk should be able to deliver about 100 MB/sec sustained at the beginning of the disk, dropping to around 65 MB/sec towards the end. This is down to how mechanical disks are constructed: the platter spins at a constant speed, and the outer tracks (the "beginning") are longer, so more data passes under the head per revolution than on the inner tracks.
So, with your current OS software, you will never get higher transfer rates over CIFS/SMB; it is protocol-bound. Multiple sessions should give you slightly higher performance, but not by much. I would recommend Windows 2008 R2 x64 and Windows 7 Pro x64. I use both (Win2k8 R2 x64 running on VMware ESX 4.0) and I easily get about 110 MB/sec reading files from the Windows 7 client.
Of course, it all depends on what you want to test. If you wish to test theoretical network bandwidth, bottlenecks, etc., use some of the tools noted above. If you wish to test pure data transfer rate, use Microsoft IIS FTP and a good FTP client. If your goal is file sharing, upgrade your OSes.
Last thing: how do you measure? Performance Monitor is best, but tools like NetMeter (free, get the beta) or DU Meter are very handy while testing.
Hopefully this explains the results a bit. If you have any more questions, or can tell us what you are looking for, let us know!