MisterDuck

Cheap Gigabit Switch?


I don't know for sure, but I would think not. Packets are still 1500 bytes; they are just packaged into larger frames.

          Frame
  |        |       |
Packet   Packet  Packet



becomes (simplified):


         Frame
  |        |       
Packet   Packet

Packets are usually limited to 1500 bytes (your MTU setting) on Ethernet. As previously stated, frames vary in size by type... large, jumbo....


Or, more correctly:

      Frame                  Frame
  |        |                  |
Packet   Packet            Packet
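To put some rough numbers on the overhead difference between standard and jumbo frames, here is a back-of-envelope sketch in Python. The 18-byte Ethernet header+FCS and plain 20-byte IPv4/TCP headers are textbook values assumed for illustration; real traffic varies, and preamble/inter-frame gap are ignored.

# Rough payload-efficiency comparison, standard vs. jumbo frames.
# Assumes an 18-byte Ethernet header+FCS and plain 20-byte IPv4 and
# TCP headers (no options); preamble and inter-frame gap are ignored.

ETH_OVERHEAD = 18  # 14-byte header + 4-byte FCS
IP_HDR = 20
TCP_HDR = 20

def efficiency(mtu: int) -> float:
    payload = mtu - IP_HDR - TCP_HDR  # usable bytes per frame
    wire_bytes = mtu + ETH_OVERHEAD   # bytes actually on the wire
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} payload efficiency")

# MTU 1500: 96.2%, MTU 9000: 99.4% -- the bandwidth win is modest;
# the bigger win is fewer frames, so less per-packet CPU work.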


About jumbo frames: is that even an IEEE standard yet? You'll need support not only from your switch but also from your NICs and NIC drivers. You have to ask yourself whether the gain is worth the huge price differential.

And, as I've said, I'm getting 61 MBytes/sec on my Linux LAN through my cheapo gigaswitch with cheapo built-in gigabit LAN ports on the motherboards. For the price, I'm very pleased. At this speed, my "bottleneck" has shifted from LAN to hard disk writing speed.

and still mix up the GHz/Gbps

cat5e is typically 350MHz. go look it up :)

You got it slightly wrong...

CAT5 and CAT5e are both 100MHz (specification EIA-568). What the "e" brings is a more precise specification for some parameters concerning crosstalk and delay skew. These parameters were not so important as long as the bandwidth was high relative to the bit rate, but as the bit rate surpassed the bandwidth, the signal was not only decreasing in strength over distance but also deteriorating badly.

Crosstalk: The main component of crosstalk is inductive coupling (not capacitive coupling), which is reduced by twisting the pairs of wires. The amount of crosstalk is inversely related to how perfectly this twist is done, and the less crosstalk, the better the signal-to-noise ratio you get.

Delay skew: There would be no problem if the carrier frequency simply stayed at 2GHz (twice the bit rate), but the data modulation makes the spectrum far from a Dirac impulse. And as most of us know, filtering delays the frequency spectrum in a precise but frequency-dependent way. (A Bessel filter would not be as bad, but there are no passive Bessel filters anywhere.) To put it straight: good cable design can minimize skew.

Note: CAT6 is 250MHz.

/casa


At http://www.cisco.com/en/US/products/hw/swi...008007fb06.html I read:

When you enable the jumbo frame feature on a port, the port can switch large (or jumbo) frames. This feature is useful in optimizing server-to-server performance. The default maximum transmission unit (MTU) frame size is 1548 bytes for all Ethernet ports. By enabling the jumbo frame feature on a port, the MTU size is increased to 9216 bytes.

So MTU does increase. About jumbo frames it also said "This feature is useful in optimizing server-to-server performance."
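For anyone who wants to check what MTU an interface is actually running, here is a minimal Linux-only sketch in Python. SIOCGIFMTU is the standard Linux ioctl for this; the interface name "eth0" is just an example.

import fcntl, socket, struct

SIOCGIFMTU = 0x8921  # Linux ioctl: get interface MTU

def get_mtu(ifname: str) -> int:
    # struct ifreq: 16-byte interface name followed by an int field
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        ifr = struct.pack("16si", ifname.encode(), 0)
        return struct.unpack("16si", fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifr))[1]

print(get_mtu("eth0"))  # e.g. 1500 normally, 9000 with jumbo frames enabled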


I would still question the cables, although 20-25m on regular CAT5 *may* work.

GigE needs 250MHz to run properly in 100m runs.

With 100MHz cable, you're only giving GbE 40% of the necessary bandwidth. 20m seems optimistic.

I haven't tried running GbE over CAT5, so I cannot *tell* you whether or not it will work at true GbE speeds.

The confusion about CAT5e is that the spec states 100MHz spectral bandwidth. However, most manufacturers produce 350MHz-rated cable, far in excess of the spec. This is mainly due to UTP GbE.

CAT6 is spec'd at 250MHz but you should be able to find 500MHz rated cable.

At any rate, I'd be interested in the results. :D

DogEared

8^)


350MHz...

Then why not mark them as CAT6? Or is the CAT6 specification not met? Whatever...

Don't forget, you'll never know whether you've got less-than-perfect cabling unless you bring in one of those nifty HP cable analyzers. If you've got nodes capable of delivering the full 1000Mbps back to back, you may manage without one ;)

Note: What happens when you've got a bad cable? Decreased throughput!

/casa


...seems perhaps the best thing for me to do would be to buy a cheap four-port gigabit switch and play around on my network to see if I can get respectable speeds. I know the cable I put in was high quality at the time (shielded, supposedly twisted pretty well, etc.), but the proof is obviously in the pudding... I have one gigabit NIC in my work laptop, and I think I'll buy one more; any recommendations as to specific models for testing out my network?

Sorry to milk everyone for information, but this is good stuff. Much thanks.


No, the cables aren't crossover and they're already in the walls, so just doing a straight connection isn't going to work unless I jerry-rig something... I'd think a cheap router (I need a new router anyway) would be best.

You don't need crossover cables with GE. Just use an ordinary straight cable and it should work.

What he said. Most GigE NICs have the ability to 'cross the wires' themselves (automatic MDI/MDI-X).


My test with my onboard gigabit and an X-over cable.

Jumbo packets off file from D:\game1 --> D:\game2

38MB/s 65% CPU

Jumbo packets on file from D:\game1 ---> D:\game2

38MB/s 40% CPU

Jumbo packets off files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

50MB/s 99% CPU

Jumbo packets on files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

70MB/s 65% CPU

38MB/s is the max write speed these WD drives can handle. That is a bottleneck that the second drive overcomes.

70MB/s is a PCI bottleneck that cannot be overcome.

50MB/s is a bottleneck that can be overcome with jumbo packets.

That is enough for me! If the gigabit switch doesn't support jumbo frames, I would not buy it. I think I will have to buy myself a gigabit switch now.... :blink: 70MB/s is pretty damn fast. It would not take much time to move a terabyte or so. B)
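For scale, the terabyte remark works out like this. Pure arithmetic; protocol overhead and disk slowdowns are ignored.

# Time to move 1 TB at the observed 70 MB/s -- pure arithmetic,
# ignoring protocol overhead and any disk slowdowns.
tb_bytes = 1e12
rate_bytes_per_sec = 70e6
print(f"{tb_bytes / rate_bytes_per_sec / 3600:.1f} hours")  # ~4.0 hours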

Gameboxes, 2 of them:

GA-7N400 Pro2 F5

2500+ @ 3200+ (200x11 Default Vcore)

HyperX PC3500 512MB (1x512) CL2,2,2,6 2.5V

MSI GF4 4200 64Meg (266/533)

SB Audigy

WD1000BB DRIVE 0 MASTER

WD1000BB DRIVE 0 SLAVE

Liteon 48x12x48 DRIVE 1 Master

Antec 400watt

Onboard GE

GigE needs 250MHz to run properly in 100m runs.

With 100MHz cable, you're only giving GbE 40% of the necessary bandwidth.  20m seems optimistic.

Actually, 20m seems pessimistic to me, under those conditions. If you can do 100m with 250MHz-rated cable, then I'd think you'd be looking at more like 40m with 100MHz cable...

-- Rick
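Rick's extrapolation as a one-liner. Note the assumption baked in: that usable run length scales linearly with the cable's bandwidth rating, which is a rough simplification since attenuation and crosstalk are not linear in frequency.

# Linear extrapolation of run length from cable bandwidth rating.
# Assumes reach scales proportionally with rated MHz -- a rough
# simplification; attenuation and crosstalk are not linear in frequency.
full_run_m, full_bw_mhz = 100, 250  # GigE run on 250 MHz cable, as claimed above
cat5e_bw_mhz = 100
print(full_run_m * cat5e_bw_mhz / full_bw_mhz)  # -> 40.0 metres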

GigE needs 250MHz to run properly in 100m runs.

With 100MHz cable, you're only giving GbE 40% of the necessary bandwidth.  20m seems optimistic.

Actually, 20m seems pessimistic to me, under those conditions. If you can do 100m with 250MHz-rated cable, then I'd think you'd be looking at more like 40m with 100MHz cable...

-- Rick

Since I'm the one who said 20m on cat5e with GigE... I'll clarify where this came from.

This figure was the maximum distance we could get between a Cisco 2950T-24 and a server with an Intel PRO/1000 MT adapter, at work. Testing with a Broadcom adapter (from a Dell 2650), we only got 19m... with cat5e.

This is from experience, not some scientific maths. ;)


Now I will try the same test with the old Cat5 I have run throughout my house. I went into my comm closet and put an X-over cable into the patch panel so my two game boxes were directly connected. The difference this time is that while the boxes may only be 2 feet apart, the patch panel is about 50' away. The total length then is two 6' patch cords, one 6' X-over, and two 50' Cat5 runs to the comm closet. That totals 118', or 36 meters. The results were close to the original 6' X-over cable, but the measurements were not as flat: there were times when the transfers would dip down 10-20MB/s (kinda choppy). The numbers I present are averages using WinXP's performance monitor; I used the hard drive "bytes written per sec" counter. I used Task Manager for CPU usage and just made a rough guess. The files were 10-700MB movies.
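As a quick sanity check of that length arithmetic in Python:

# Sanity check of the cable-length arithmetic above.
segments_ft = [6, 6, 6, 50, 50]  # two patch cords, one X-over, two wall runs
total_ft = sum(segments_ft)
print(total_ft, "ft =", round(total_ft * 0.3048, 1), "m")  # 118 ft = 36.0 m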

Jumbo packets off file from D:\game1 --> D:\game2

38MB/s 65% CPU receiving on game2 25% CPU sending on game1

Jumbo packets on file from D:\game1 ---> D:\game2

38MB/s 40% CPU receiving on game2 20% CPU sending on game1

Jumbo packets off files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

45MB/s 99% CPU receiving on game2 45% CPU sending on game1

Jumbo packets on files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

66MB/s 75% CPU receiving on game2 35% CPU sending on game1

Conclusion: Use good cabling if you plan to go long distances. I will probably get X-over results when I get a gigabit switch because my cable lengths will be cut in half. Though if I don't get X-over results, I might just have to re-run my Cat5 and replace it all with Cat5e cable and ends.

Now I might just have to make myself a 100' X-over cable and add that to the equation to see how well everything works at 212'. Not right now though; I'm going to bed, it is late. :blink:

Jumbo packets off file from D:\game1 --> D:\game2

38MB/s 65% CPU receiving on game2 25% CPU sending on game1

Jumbo packets on file from D:\game1 ---> D:\game2

38MB/s 40% CPU receiving on game2 20% CPU sending on game1

Jumbo packets off files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

45MB/s 99% CPU receiving on game2 45% CPU sending on game1

Jumbo packets on files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

66MB/s 75% CPU receiving on game2 35% CPU sending on game1

Sounds reasonable.

All the added resends at the Ethernet level won't affect the "HDD write" or "PCI capacity" limits. So in the first two test setups, the remaining capacity on the cable was enough to keep the pace. However, on the remaining ones, the total capacity was too low and therefore throughput decreased.

Thanks for your real-world tests :)

/casa

However, on the remaining ones, the total capacity was too low and therefore throughput decreased.

By "remaining" I was referring to the other two tests...

/casa

As other posters have detailed, a switch with jumbo frame and VLAN support is essential to avoid killing the gig-e wired hosts with interrupt traffic.

Does the switch fragment the packets when they're sent from a jumbo-frame enabled VLAN to a jumbo-frame disabled VLAN?

Any router between the two VLANs will do so, as the MTU is different on each side.

greg
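As a sketch of what that fragmentation means in practice, assuming plain IPv4 with a 20-byte header and no options (fragment payloads, except the last, must be a multiple of 8 bytes):

import math

# How many fragments does a jumbo datagram become at a 1500-byte MTU hop?
# Assumes IPv4 with a plain 20-byte header and no options.
IP_HDR = 20

def fragments(datagram_len: int, mtu: int) -> int:
    payload = datagram_len - IP_HDR     # data carried by the datagram
    per_frag = (mtu - IP_HDR) // 8 * 8  # 1480 bytes at MTU 1500
    return math.ceil(payload / per_frag)

print(fragments(9000, 1500))  # a 9000-byte jumbo datagram -> 7 fragments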

70MB/s is a PCI bottleneck that cannot be overcome.

That is enough for me! If the gigabit switch doesn't support jumbo frames, I would not buy it. I think I will have to buy myself a gigabit switch now.... :blink: 70MB/s is pretty damn fast. It would not take much time to move a terabyte or so. B)

Onboard GE

Your performance over PCI should be slightly better than that; it may be worth looking at PCI latency settings and the driver features of your gig-e NIC.

It may support checksum offloading and other load-reducing facilities which won't be enabled by default.

Current high-end Intel-chipped motherboards, and soon just about all motherboards, will have gig-e hung directly off the southbridge rather than PCI. This will bring usable full-duplex gig-e within reach of everyone.

As I said before, eBay is the place to look for switches with the right gig-e feature sets. Ciscos will always attract a premium; however, there are other makes which have all the facilities to handle gig-e properly and will go for a bargain.

greg

A. Jumbo packets off file from D:\game1 --> D:\game2

38MB/s 65% CPU receiving on game2 25% CPU sending on game1

B. Jumbo packets on file from D:\game1 ---> D:\game2

38MB/s 40% CPU receiving on game2 20% CPU sending on game1

C. Jumbo packets off files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

45MB/s 99% CPU receiving on game2 45% CPU sending on game1

D. Jumbo packets on files from D:\game1 -> D:\game2 & C:\game1 -> C:\game2

66MB/s 75% CPU receiving on game2 35% CPU sending on game1

What is the difference between A and B and between C and D?

At http://www.cisco.com/en/US/products/hw/swi...008007fb06.html I read:
When you enable the jumbo frame feature on a port, the port can switch large (or jumbo) frames. This feature is useful in optimizing server-to-server performance. The default maximum transmission unit (MTU) frame size is 1548 bytes for all Ethernet ports. By enabling the jumbo frame feature on a port, the MTU size is increased to 9216 bytes.

So MTU does increase. About jumbo frames it also said "This feature is useful in optimizing server-to-server performance."

That's just a suggested use. Jumbo frames are useful whenever you need to transfer large amounts of data from host to host, not just server to server. For example, one use is if you have an NFS server with filesystems mounted on a workstation.

Joo
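One way to see the host-to-host benefit is in frame rate rather than raw bandwidth. A rough sketch, assuming a fully saturated 1Gbps link and ignoring preamble and inter-frame gap:

# Frames per second needed to sustain ~1 Gbps at different MTUs.
# Ignores preamble/inter-frame gap; treat the numbers as ballpark.
line_rate = 125e6  # 1 Gbps in bytes/s
for mtu in (1500, 9000):
    print(mtu, round(line_rate / mtu), "frames/s")

# 1500 -> ~83,333 frames/s; 9000 -> ~13,889 frames/s.
# Fewer frames means fewer interrupts and less per-packet CPU work.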


70MB/s is a PCI bottleneck that cannot be overcome.

That is enough for me! If the gigabit switch doesn't support jumbo frames, I would not buy it. I think I will have to buy myself a gigabit switch now.... :blink: 70MB/s is pretty damn fast. It would not take much time to move a terabyte or so. B)

Onboard GE

Your performance over PCI should be slightly better than that; it may be worth looking at PCI latency settings and the driver features of your gig-e NIC.

I'm not sure if 70MB/s is a PCI limit or an HD limit, but if the HDs are writing at 70MB/s and the gigabit NIC is receiving at 70MB/s, isn't that 140MB/s total through the PCI bus?

133MB/s is supposed to be the limit of a 32-bit/33MHz PCI bus, but since my NIC is built onto the mobo, it may be connected to a second PCI bus. THAT IS MY GUESS :huh: I may have to redo this test with my server; it has an Intel PRO/1000 MT NIC in a PCI slot. At the last LAN party my server was able to send 45MB/s to leechers, and I figured that was a total of 45MB/s HD + 45MB/s NIC = 90MB/s through the PCI bus <_<

That is much closer to the actual PCI bus limit.
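That bus-budget reasoning can be sketched out like this, treating 132MB/s as the theoretical 32-bit/33MHz PCI peak and assuming the data crosses the shared bus twice (NIC to RAM, then RAM to disk):

# Back-of-envelope PCI budget: data arriving on the NIC and then being
# written to disk crosses a shared 32-bit/33 MHz PCI bus twice.
pci_peak = 33e6 * 4 / 1e6  # 132 MB/s theoretical peak
for transfer in (70, 45):  # MB/s figures observed in the thread
    bus_load = 2 * transfer  # NIC -> RAM plus RAM -> disk
    print(f"{transfer} MB/s transfer -> {bus_load} MB/s bus traffic "
          f"({bus_load / pci_peak:.0%} of peak)")

# 70 MB/s would need 140 MB/s of bus traffic, over the 132 MB/s peak --
# consistent with the guess that the onboard NIC sits off a second bus.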

