Re: TCP_RR test weirdness

Rick Jones
Thu, 15 Jul 1999 11:54:33 -0700

I'm going to guess that you are running on Linux - Linux seems to have a
default "effective" MSS of 1448 bytes instead of 1460 bytes - the other
12 bytes are consumed by the RFC 1323 (TCP-LW) timestamp option, which
is 10 bytes padded out to 12.

Linux also implements the Nagle algorithm incorrectly. That is the
algorithm designed to protect the network from floods of small sends by
applications. It should be interpreted on the basis of the user's
_send_, not segment by segment.

Linux interprets it segment by segment, regardless of the size of the
user's send.

HP-UX correctly interprets it on a user's send basis.

In other words, if the user sends >= MSS bytes in a send call, *none* of
those bytes should be delayed by the Nagle algorithm, regardless of the
size of the last segment holding data from that send.
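To make the difference concrete, here is a small sketch of the two
interpretations. This is illustrative Python of my own, not netperf or
kernel code, and the function names are made up:

```python
MSS = 1448  # Linux's default effective MSS with the RFC 1323 options

def segments(send_size, mss=MSS):
    """Split one user send into TCP segment sizes."""
    full, tail = divmod(send_size, mss)
    return [mss] * full + ([tail] if tail else [])

def delayed_per_segment(send_size, unacked=True):
    """Per-segment Nagle (Linux, per this report): any sub-MSS
    segment is held while earlier data is unacknowledged."""
    return [seg for seg in segments(send_size) if seg < MSS and unacked]

def delayed_per_send(send_size, unacked=True):
    """Per-send Nagle (HP-UX, per this report): if the user's send
    is >= MSS, nothing from it is held, not even a small tail."""
    if send_size >= MSS:
        return []
    return delayed_per_segment(send_size, unacked)

# A 1449-byte send becomes segments [1448, 1]: per-segment Nagle
# holds the 1-byte tail; per-send Nagle sends both immediately.
print(delayed_per_segment(1449))  # [1]
print(delayed_per_send(1449))     # []
```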

Notice, by the way, that the slow ranges you see begin just past odd
multiples of 1448 (1449, 4345, 7241), which is consistent with a small
trailing segment being held by Nagle while the receiver's delayed ACK
withholds the ACK it is waiting for.

So, I suggest you submit a defect against Linux. (Or whatever OS you
happen to be using)

rick jones

Sam Riesland wrote:
> I am running netperf with the following command line... where $RR_SIZE is
> the size of the packets I'm sending and receiving.
> /netperf -l 5 -H astrolab3 -i 10,3 -I 99 -d -t TCP_RR -- -r $RR_SIZE
> And for some reason when I set $RR_SIZE (the send,receive packet size) to
> between "1449,1449" and "2171,2171" the Transfer Rate drops way down to
> about 99 bytes per second from somewhere in the thousands. Also the same
> thing occurs between 4345 and 5067... and then again between 7241 and
> 7963. So every 2174 bytes there is a 722 byte block in which the Transfer
> Rate drops down to around 99 bytes. I can get rid of this problem by
> using the -D switch (TCP_NODELAY set)... but this behavior seems a bit
> strange. Is this a netperf problem? Has anyone else run into anything
> similar?
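For reference, netperf's -D switch corresponds to setting TCP_NODELAY
on the data socket, and an application can do the same thing directly.
A minimal sketch in Python (my illustration, not netperf source):

```python
import socket

# Disable the Nagle algorithm on a TCP socket, as netperf's -D
# switch does: small writes then go out immediately instead of
# waiting for outstanding data to be acknowledged.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Confirm the option took effect (non-zero means Nagle is off).
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
sock.close()
```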

these opinions are mine, all mine; HP might not want them anyway... :)
feel free to email, or post, but please do not do both...
my email address is raj in the domain...