I suspect some kind of deadlock and timeout is occurring. Some papers
discuss a throughput-deadlock situation that can happen in TCP/IP-over-ATM
environments. I'm not sure that is what is happening here, because it
cannot explain why the slowdown occurs in one direction only.
To understand the problem better, I measured the time taken by each
individual read or write call in both directions. 10 chunks of 8 KB of
data were sent in each case; all times are in seconds.
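The measurement loop was roughly equivalent to the following sketch (shown here in Python with a local socketpair standing in for the real client/server TCP connection, so it is self-contained; the variable names are illustrative):

```python
import socket
import time

CHUNK = 8 * 1024   # 8 KB per chunk, as in the test
N = 10             # number of chunks

# A local socketpair stands in for the actual TCP connection.
writer, reader = socket.socketpair()

payload = b"x" * CHUNK
write_times = []
read_times = []

for _ in range(N):
    # Time one complete write of an 8 KB chunk.
    t0 = time.monotonic()
    writer.sendall(payload)
    write_times.append(time.monotonic() - t0)

    # Time reading the same chunk back; recv may return short reads,
    # so loop until the full chunk has arrived.
    t0 = time.monotonic()
    got = 0
    while got < CHUNK:
        got += len(reader.recv(CHUNK - got))
    read_times.append(time.monotonic() - t0)

for i, (w, r) in enumerate(zip(write_times, read_times), 1):
    print(f"{i:2d}  write {w:.6f}  read {r:.6f}")
print(f"Total: {sum(write_times) + sum(read_times):.6f} seconds")

writer.close()
reader.close()
```

On a socketpair the times will of course be small and uniform; the interesting behaviour only shows up over the real link.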
Client -> Server
----------------
chunk (server read) (client write)
1 0.000867 0.000739
2 0.000634 0.000471
3 0.000759 0.000387
4 0.002860 0.001939
5 0.000863 0.000356
6 0.000697 0.003460
7 0.001868 0.000968
8 0.000612 0.000403
9 0.001159 0.002203
10 0.000922 0.000489
----------------
Total: 0.011424 seconds
Server -> Client
----------------
chunk (server write) (client read)
1 0.000707 0.052361
2 0.000948 0.043168
3 0.000406 0.050055
4 0.092609 0.050401
5 0.000859 0.049735
6 0.099258 0.049865
7 0.000422 0.049974
8 0.099437 0.050011
9 0.000362 0.049966
10 0.099599 0.050677
----------------
Total: 0.496438 seconds
As far as I know, the throughput deadlock results from the delayed
acknowledgement timer, which fires at 200 ms intervals; that does not seem
to match my case. I hope someone knows something about this.
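In case it helps anyone reproduce or rule things out: one quick experiment is to disable Nagle's algorithm on both ends (the classic interaction partner of delayed ACKs) and re-run the test. This is only a sketch of setting the socket option, not something I have tried yet:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm with TCP_NODELAY.
# If the one-directional stalls vanish with this set on both client and
# server, a Nagle/delayed-ACK interaction becomes the prime suspect.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect (nonzero means enabled).
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY enabled:", nodelay != 0)

s.close()
```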
Thanks in advance,
Edwin.