Cool - the netperf database at www.netperf.org would love to see some
numbers. Don't forget to measure TCP_RR too :)
> A couple of questions about how netperf bases its throughput #'s (Mbps).
>
> Assume that for these questions / statements, I am only using a
> cross-over cable between 2 NICs on 2 separate systems.
>
> - Is the throughput (Mbps) # based strictly on unidirectional writing
> of data and timing its completion, using some statistical passing
> of results between client and server ?? Would this be considered
> half-duplex ?? In either case, if I run netperf on both systems
> simultaneously (reporting 190 Mbps on both systems, from each netperf),
> does this mean that I add up the 2 system throughput #'s to get the
> 'Full Duplex' throughput ?? (380 Mbps ??), or does netperf take into
> account and use the timing of both the sending and receiving sockets,
> based on the total # of bytes transmitted and received (handshaking)
> even in a unidirectional transfer ??
Netperf takes a timestamp at the beginning of the test, counts how many
bytes were sent across the wire, takes a timestamp at the end, and
divides. The elapsed time includes the exchange of shutdowns/FINs
between the ends, to make sure all the data got across.
Each netperf is an independent entity. So, if netperf A says 100, and
netperf B says 100, and A and B ran at the same time with minimal skew,
you can add A and B and say that the aggregate throughput C was 200.
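A minimal sketch of that arithmetic (the byte counts and timestamps here are made-up illustrative values, not netperf's actual internals):

```python
# Sketch of how netperf derives its throughput figure: total bytes
# moved, divided by the wall-clock time between the start and end
# timestamps (the end timestamp comes after the FIN exchange, so all
# data is known to have arrived). Values are hypothetical.

def throughput_mbps(bytes_sent, start_s, end_s):
    """Report 10^6 bits per second, as netperf labels 'Mbps'."""
    return (bytes_sent * 8) / (end_s - start_s) / 1e6

# One unidirectional stream: ~237.5 MB in 10 seconds -> 190 Mbit/s
a = throughput_mbps(237_500_000, 0.0, 10.0)

# A second, independent netperf run with minimal skew; since each
# netperf only measures its own stream, concurrent results simply add.
b = throughput_mbps(237_500_000, 0.0, 10.0)

total = a + b
print(a, total)  # 190.0 380.0
```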
It is important to remember that there is no explicit synchronization of
netperf revision two tests. You should increase your run times to a
couple of minutes, and not use the confidence intervals. Some folks have
reported good results by running three consecutive netperf tests at each
window size and taking the results of the middle test:
$ netperf; netperf; netperf
The idea is that you can be reasonably sure that the middle of the three
runs would have all the different tests running at the same time - some
might be on test 1 of three, others on 2, maybe a couple of others on 3,
but all streams should be running at the same time. Caveat benchmarker.
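A sketch of that middle-of-three bookkeeping (parsing the actual netperf output is assumed and not shown; the throughput numbers are invented for illustration):

```python
# Each concurrent stream runs three back-to-back netperf tests; keep
# only the middle (second) result from every stream and sum those, on
# the theory that run two is the one during which all streams were
# actually overlapping.

def aggregate_middle(per_stream_results):
    """per_stream_results: list of [run1, run2, run3] Mbps per stream."""
    return sum(runs[1] for runs in per_stream_results)

streams = [
    [188.0, 190.0, 191.0],  # stream A's three consecutive tests
    [185.0, 189.0, 192.0],  # stream B's three consecutive tests
]
print(aggregate_middle(streams))  # 379.0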
> - Given that this Sun VGE driver operates only at a FULL duplex mode,
> does this preclude me to the belief that only having one system
> transmitting data to another (in a single direction), that this only
> can be called 'half-duplex', thus not being allowed to consume more
> than half of the bandwidth ??? I don't believe so, but please correct
> me if I am wrong. I understood the dynamics of ethernet throughput to
> simply allow for a maximum amount of data (bandwidth) to be carried on
> the media overall (total of both sides, e.g. 100 Mbps sending, 900 Mbps
> receiving; if true Gbps were achievable).
>
> This leads me into the question of collision detection not being
> a factor (or even possible) when using 'Full Duplex' Ethernet.
> Does this mean that even though no collisions/errors are reported
> by the device driver, collisions could still be occurring ?? Or
> does this mean that collisions are physically impossible, given that
> two separate 'channels' are multiplexed over the media ??? Is this
> one of the reasons that 'Full Duplex' Ethernet is usually (if not
> always) wired so that a switch is the connecting device between nodes,
> rather than allowing several nodes to transmit simultaneously without
> collision detection ?? Please enlighten me on all of the nitty-gritty
> details, as I have not received any 'good' explanations as of yet.
I am reasonably certain that when an interface is run in full-duplex
mode, collision detection does not come into play - a strict purist
would then argue that it isn't Ethernet :)
> For the Sun folks, what are the 10 or 15 'vge' driver parameters exactly
> used for ??? Is there an internal doc or docs that describe various
> tested configurations ??? I've been told to do different things by
> both support and the configuration manuals...
>
> Please respond as soon as possible, since the customer(s) at Lucent
> are becoming VERY uneasy after spending sooo... much money, not to
> get the expected bandwidth from Gigabit Ethernet (unless the tools
> are deceiving us) ??
What is the expected bandwidth from Gigabit Ethernet? That may sound
like a silly question, but it is valid - just as 100BaseT did before it,
Gigabit Ethernet does *nothing* to make data transfer any easier on the
hosts - the MTU is still the same paltry 1500 bytes, which means a given
quantity of data takes just as many packets, and essentially just as
many CPU cycles, as it did before.
So, if your system was nearly CPU-bound (check all the CPUs in an MP,
not just the aggregate utilization...) at 100BaseT, there will be very
little gain, if any, from GigE.
If people really want to see bandwidth increase with a CPU reduction,
they should be clamoring for FibreChannel networking - that supports up
to a 60K MTU or more, which would make a very serious dent in CPU
utilization for something like a netperf TCP_STREAM test and those apps
which behave similarly.
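To put rough numbers on the packet-count point (a 60 KiB MTU is assumed for the FibreChannel case, and header overhead and the transfer size are simplifications, not measured figures):

```python
# How many packets a given transfer takes at Ethernet's 1500-byte MTU
# versus a hypothetical 60 KiB MTU. The packet count at 1500 bytes is
# the same for 100BaseT and GigE, which is why per-byte CPU cost barely
# changes; a larger MTU means far fewer trips through the protocol
# stack, which is where the CPU savings would come from.
import math

def packets_needed(total_bytes, mtu):
    return math.ceil(total_bytes / mtu)

transfer = 100 * 1024 * 1024               # an example 100 MiB transfer
eth = packets_needed(transfer, 1500)       # 1500-byte Ethernet MTU
fc  = packets_needed(transfer, 60 * 1024)  # assumed 60 KiB MTU

print(eth, fc, eth // fc)  # 69906 1707 40  (~40x fewer packets)
```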
happy benchmarking,
rick jones
> Thanks,
>
> Todd Jobson
> Technical Manager
> Sun Professional Services
> todd.jobson@east.sun.com
> tjobson@aluxpo.micro.lucent.com
--
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to email, or post, but please do not do both...
my email address is raj in the cup.hp.com domain...