Analysis of MTUs of 1500 and of 9000

with respect to RTT and Packet Size

 

 

 


John Egan

Computer Science and Engineering

University of South Florida

Tampa, FL 33620

johne87@hotmail.com

 

Abstract

This is a comparative analysis of MTUs of 1500 and 9000 bytes. Within this analysis, the RTTs between a sender and a receiver are examined as a function of packet size. The RTT is defined as the time from when a message is sent to the receiver until the corresponding acknowledgment arrives back at the sender. Both the UDP and TCP protocols are employed in this analysis. It was found that the time it took for the sender to send a packet and receive an acknowledgment was lower at an MTU of 1500 than at an MTU of 9000. This result was unexpected.

I will also briefly review previous work dealing with aspects of network traffic, including packet size, message size, and throughput.

 

1. Introduction

 

In this paper, MTUs( Maximum Transmission Units ) of 1500 and 9000 bytes are studied with respect to RTTs( Round Trip Times ) and packet size.

The RTTs of the two MTUs will be compared using the TCP protocol, and a second comparison will be made using the UDP protocol.

The hope is that, with an MTU of 9000 bytes, the RTTs will decrease faster than with an MTU of 1500 bytes. By showing that larger frame sizes can reduce RTTs and hence increase throughput, these same techniques could be applied to the internet to help further reduce congestion.

The remainder of this paper is organized as follows. Section 2 briefly overviews work that dealt with packet size, frame size, CPU utilization, message size, and throughput. Section 3 describes the motivation of this study. Section 4 describes the methods used to evaluate the RTTs between the client and server. Section 5 analyzes and critiques the results of this study. Section 6 presents the conclusions drawn from the analysis in Section 5.

 

 

 

2. Background

Researchers have studied and analyzed many characteristics of network traffic. I would like to briefly review some work previously done on network traffic with respect to packet size, frame size, delay, and throughput between sender and receiver.

For example, Schroeder and Burrows have studied the performance of remote procedure calls( RPCs ) on a Firefly multiprocessor[ 1 ]. In this study, they wanted to understand the latencies involved between sender and receiver in order to make the RPCs faster[ 1 ]. They have taken into consideration aspects such as CPU utilization, throughput, and packet size[ 1 ]. They have meticulously analyzed the times within each procedure call. If they could not time an event, they would count the lines of assembly code and multiply this count by the CPU's cycle time in order to estimate a time[ 1 ].

Holleczek and Baumgarten measured the throughput of packets in order to analyze the effectiveness of high speeds on an X.25 network[ 2 ]. They have evaluated throughput as a function of packet size and have also considered the workload at each station and the bandwidth at several switches as contributing factors[ 2 ]. They looked at packet sizes in the range of 64 to 1024 bytes[ 2 ]. The paper concludes that the throughput on an X.25 network is limited only by the current hardware, not by the TCP/IP protocol[ 2 ].

Kay and Pasquale suggest that the bottleneck in networks resides in the software, that is, in the TCP/IP and UDP/IP protocols[ 4 ]. Their paper meticulously examines the time spent in the operations within each protocol as the packet size varies[ 4 ]. They have found two operations to be major contributing factors in this software bottleneck, namely the checksum and data movement operations[ 4 ]. They have also concluded that large maximum transmission units( MTUs ) can greatly enhance throughput[ 4 ]. Also, since the behavior of the TCP and UDP protocols is very similar, "lightweight" protocols are not needed[ 4 ].

Dalton et al. believe that the bottleneck in the performance of a network is concentrated at the end stations[ 5 ]. This bottleneck is the redundant copying of data[ 5 ]. They have implemented a network card called Afterburner to test their theory[ 5 ]. Afterburner differs from other network-independent cards in that it reduces the movement of data between the sender/receiver and the network[ 5 ]. They have examined throughput, packet size, and workstation performance in their analysis[ 5 ]. They have found that a relatively small packet size of 14 Kbytes achieves the maximum throughput of 25 Mbytes per second, thus showing that larger packet sizes are unnecessary[ 5 ]. They also noted that Afterburner's architecture might scale up to a throughput that is close to gigabits per second[ 5 ].

Heatley and Stokesberry have taken measurements at the transport, network, data link, and physical layers of the OSI model[ 6 ]. They investigated these protocols to see how well suited they are to transporting data[ 6 ]. They varied many parameters while taking into account the end-to-end delay and throughput[ 6 ]. Some of the parameters include processing time, message size, and packet size[ 6 ]. They found that the bottom four layers of the OSI model should be improved or replaced if better performance on the network is to be expected[ 6 ].

A paper by Alteon Networks[ 7 ] advocates the use of jumbo size frames, whereas a paper by 3Com( a recent Microsoft collaborator )[ 8 ] downplays their usefulness.

Alteon Networks proposes the use of jumbo size frames now that Gigabit Ethernet technology has arrived[ 7 ].

In one of their tests, the CPU utilization with a 9018-byte frame is reduced by 50% versus a 1518-byte frame, while throughput increases by almost 50%[ 7 ]. They also argue for the jumbo frame size to be set at 9018 bytes by considering factors such as the frame check sequence( FCS ), memory paging systems, and common application block sizes[ 7 ]. The driving force behind jumbo size frames is higher-speed networks and the potential for greater throughput of data[ 7 ].

3Com, a company that works with network-related products, scoffs at the idea of using jumbo frames to alleviate server congestion[ 8 ]. They argue that jumbo frames of 9018 bytes will cause more congestion on the internet and further degrade performance[ 8 ]. One solution to this degradation is to confine jumbo frames to a LAN, where the network is homogeneous[ 8 ]. Another solution proposed by 3Com is not to focus on large frame sizes but instead to streamline existing hardware, software, and operating system design[ 8 ].

Stevens makes a reference to the work of Bellovin, in which Bellovin asks whether bigger packets are faster than smaller packets[ 9 ]. Bellovin's answer was that it depends[ 9 ]: the total time it takes to send packets on the network may depend on the number of hops they travel and on the congestion of the network[ 9 ].

 

3. Problem

 

Performance on the internet is entering a new stage with the arrival of gigabit technology. Delay on the network does not seem to concern many users, so some may ask: why fix it if it is not broken? True, but the internet is rather young and over time will undoubtedly gain more users.

Also, traffic on the internet is carrying more and more UDP/IP datagrams. This increase may be due to multimedia communications such as video conferencing. The UDP protocol, unlike TCP, does not implement backoff( timeout ) methods, and these backoff methods help to keep the internet from exhibiting severe congestion.

Also, with smaller packet sizes the client and server must endure more overhead; with larger packet sizes we can reduce this overhead.

Lastly, if possible, it would be nice to increase performance anywhere we can.

These ideas, plus the promise that larger frame sizes help to minimize RTT delays and in turn generate higher throughput, are why we have chosen to analyze the MTUs of 9000 and 1500 bytes.

 

4. Method

 

The initial belief is that maximizing the frame size will minimize the RTT and consequently increase throughput. We can hopefully further refine the RTT by reproducing the above-mentioned experiment with a larger frame size.

The sequence of steps in gathering the times is similar to that in Kay and Pasquale's paper, section 2.0[ 4 ]. The main idea is, for each packet size, to send a file of one megabyte and record the RTTs. This is repeated until MAX_PACKET_SIZE is reached. The sequence is in turn repeated for each protocol, that is, TCP and UDP, and then repeated one more time for each MTU. An outline of this nesting is sketched below.
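The nesting just described can be summarized in outline form. This is only a sketch: run_transfer() is a hypothetical stand-in for the client measurement loop listed later in this section, and the MTU itself( 1500 or 9000 bytes ) is reconfigured on the network interfaces between the two outer passes rather than from within the program.

#include <sys/socket.h>              /* SOCK_STREAM, SOCK_DGRAM */

#define MAX_PACKET_SIZE  65500       /* upper end of the input range */

/* run_transfer() is a hypothetical stand-in for the client loop shown
   later in this section: it sends a one-Mbyte file using the given
   protocol and packet size and records the RTTs. */
void run_transfer( int protocol, int packet_size );

void run_experiment( void )
{
    int protocols[ 2 ] = { SOCK_STREAM, SOCK_DGRAM };   /* TCP and UDP */
    int m, p, size;

    /* one pass per MTU; the MTU is changed on the interfaces between passes */
    for( m = 0; m < 2; m++ )
        /* one pass per protocol */
        for( p = 0; p < 2; p++ )
            /* sweep the packet size across the input range */
            for( size = 1000; size <= MAX_PACKET_SIZE; size += 10 )
                run_transfer( protocols[ p ], size );
}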

 

The experiment will run on:

         two 700 MHz PCs with a paging size of 32 Kbytes,

         an Alteon 180 Gigabit Ethernet Switch,

         fiber connections, and

         MTUs of 1500 and 9000 bytes.

 

Assumptions made are:

         that traffic on the network will not be a problem since a dedicated line, provided by an Alteon 180 switch, will be in place, and

         no background programs, other than the operating systems, will run on both the sender and receiver.

         The RTT covers the data being sent to the receiver and, in turn, the receiver returning the data plus its acknowledgment.

         Also, the packet size is defined as the size of the data field within each TCP or UDP datagram. This was done to simplify calculations and to take the guesswork out of re-computing each packet when more than one frame was involved.

 

The inputs include:

         packet sizes ranging from 1000 bytes to 65500 bytes at increments of 10 bytes, and

         a file size of one Megabyte.

 

The output is the

         RTTs.

 

The procedures include:

         running a TCP and UDP client/server program[3, 10 ] for each MTU of 1500 and 9000 bytes.

         The code for the client side and for the server side is similar for both the TCP and UDP protocols. A simplified outline of the client side is presented here; FILE_SIZE, REPEAT, MAX_PACKET_SIZE, and record_rtt stand in for details omitted from this outline.

 

for( packet_size = 1000; packet_size <= MAX_PACKET_SIZE; packet_size += 10 )
   for( j = 0; j < REPEAT; j++ )
   {
      //reset the one-Mbyte file that is to be sent
      buffer_size = FILE_SIZE ;

      do{
         //start timer
         clock1 = clock() ;

         //send packet to receiver
         send( packet_size ) ;

         //recv ack of sent packet from receiver
         recv( acknowledgment ) ;

         //end timer and record the RTT for this packet
         clock2 = clock() ;
         record_rtt( clock2 - clock1 ) ;

         //decrement size of the file
         buffer_size = buffer_size - packet_size ;
      }while( buffer_size > 0 ) ;
   }

 

 

There are 3 nested loops.

         Inner-most loop.

The innermost loop obtains the RTTs by taking the difference between clock2 and clock1( the conversion of these clock readings to milliseconds is sketched after this list ). This loop also breaks the one-megabyte file down into packets of the given size.

         Middle loop.

The middle loop repeats the inner loop 5 times for statistical purposes.

         Outer loop.

The outer loop increments the packet size up to MAX_PACKET_SIZE. The maximum size is limited by the 16-bit length field of the IP datagram[ 9 ] minus the IP header length minus the TCP/UDP header length.

For UDP,

MAX_PACKET_SIZE = (2^16 - 1) - 20 - 8 = 65507 bytes.

For TCP,

MAX_PACKET_SIZE = (2^16 - 1) - 20 - 20 = 65495 bytes.

These loops will run twice, once for an MTU of 1500 and once for an MTU of 9000.
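The RTTs are reported in milliseconds, but the conversion from the values returned by clock() is not shown in the outline above. A minimal sketch of that conversion, assuming the standard C library interface; the function name rtt_in_ms is an assumption.

#include <time.h>   /* clock(), clock_t, CLOCKS_PER_SEC */

/* Convert two clock() readings into an RTT in milliseconds.
   clock() returns ticks, and CLOCKS_PER_SEC ticks make up one second.
   Note that on some platforms clock() measures processor time rather
   than elapsed wall-clock time. */
double rtt_in_ms( clock_t clock1, clock_t clock2 )
{
    return 1000.0 * (double)( clock2 - clock1 ) / CLOCKS_PER_SEC;
}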

 

Please note for the TCP runs:

         At an MTU of 9000, no error was reported when the packet size exceeded MAX_PACKET_SIZE. Also,

         at an MTU of 1500, the outer loop stopped transmitting at a packet size just under 3700 bytes. Therefore the comparisons of the TCP protocols will range from 1000 to 3643 bytes at increments of 1 byte, giving us 2644 data points.

 

Also, please note for the UDP runs:

         Flow control was added in order to calculate the RTTs and to retransmit any lost packets. Here, flow control means that the receiver returns an acknowledgment for each packet, which is not usually done with the UDP protocol. A sketch of one way the sender side of this scheme could be implemented is given below.
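The code for this added acknowledgment scheme is not listed in the paper. Below is a minimal sketch of the sender side, assuming POSIX-style sockets; the function name send_with_ack, the one-second timeout, and the buffer names are assumptions, not the actual implementation.

#include <sys/socket.h>   /* sendto(), recvfrom(), setsockopt() */
#include <sys/time.h>     /* struct timeval */

/* Send one UDP packet and wait for the receiver's acknowledgment,
   retransmitting the packet if no acknowledgment arrives in time. */
int send_with_ack( int sock, const char *pkt, int packet_size,
                   const struct sockaddr *dest, socklen_t destlen )
{
    char ack[ 8 ];
    struct timeval timeout = { 1, 0 };   /* wait up to one second for the ack */

    /* make recvfrom() return instead of blocking forever */
    setsockopt( sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof( timeout ) );

    for( ; ; )
    {
        /* send one packet of data to the receiver */
        sendto( sock, pkt, packet_size, 0, dest, destlen );

        /* wait for the acknowledgment; on timeout, loop and retransmit */
        if( recvfrom( sock, ack, sizeof( ack ), 0, NULL, NULL ) > 0 )
            return 0;   /* acknowledged */
    }
}

The receiver's side would simply answer each received packet with a short acknowledgment sent back to the packet's source address.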

 

 

 

5. Proof( by experimental results )

 

5.1 UDP/IP

 

5.1.1 MTU 9000


 

 


Figure 5.1.1.1 shows the results of the RTT, using the UDP protocol, as a function of packet size. One can see that as the packet size increases the RTTs decrease. The RTT remains roughly constant after a packet size of about 30,000 bytes. Before a packet size of 9000 bytes, one can also note a thickening of the RTTs on the graph. This may be due to the receiver returning data from its buffer in bursts. The receiver has a TCP timer within its kernel that goes off periodically, roughly every 200 or 500 ms[ 9 ]. When it goes off, all of the received messages in the buffer are returned immediately. Packets that are sent at separate times but whose replies return at the same time cause this fluctuation in the RTTs.

 

Figure 5.1.1.2 is a close-up of figure 5.1.1.1. In it one can see a delay before the packet size of 9000 bytes and slight delays at packet sizes of 27,000 and 32,000 bytes. The delays at 9000 and 27,000 bytes are caused by the onset of fragmentation into frames the MTU can handle. The slight delay at 32,000 bytes suggests going to memory for another page, since pages are set up at 32 Kbytes on both the sender and the receiver. At a packet size of 18,000 bytes one would expect another delay, but none appears. The variation in RTTs between 18,000 and 27,000 bytes reflects acknowledgments being returned in bulk from the receiver.

 

5.1.2 MTU 1500

 

Figure 5.1.2.1 shows the results of the UDP RTT as a function of packet size with the MTU set at 1500. Again, as the packet size increases, the RTTs decrease, and the RTT is roughly constant after a packet size of 30,000 bytes.

 

 

 


 

 

 


Figure 5.1.2.2 is a close up of figure 5.1.2.1. In it, one can see breaks, as expected, at multiples of 1500 bytes starting just before the packet sizes of 3000, 6000, 9000, and 12000. One can see the extra time taken to break these packets down into multiple frames. However, similar jumps in the RTT should have occurred just before packet sizes of 4500, 7500, 10500, and 13500. Instead, the RTTs unexpectedly sped up.

 

 

5.1.3 Comparison( MTU 9000 & MTU 1500 )

 

Figure 5.1.3.1 shows the comparison of UDP RTTs between MTUs of 9000 and 1500 bytes. For instance, at a packet size of 11,000 bytes the RTT at an MTU of 9000 bytes equals 0.1782 ms and that at an MTU of 1500 bytes equals 0.0962 ms. The corresponding ratio equals 1.8524. Therefore the RTT delay is larger at the MTU of 9000 bytes.

 


 

 


This figure clearly shows that the RTT is longer at an MTU of 9000 bytes than at 1500 bytes. This was an unexpected result. The bottleneck may reside within the buffers at the receiver: the buffer may fill up before the TCP timer within the kernel expires, and only when that timer expires can the receiver acknowledge the sender[ 9 ]. In the meantime, the sender keeps sending packets and the receiver simply throws them away, since its buffers cannot handle the extra load. Hence, the sender must re-send any packets not acknowledged by the receiver, thus incurring more time. One way to probe this hypothesis is sketched below.
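A sketch of such a probe, assuming POSIX-style sockets; the function name, the socket descriptor sock, and the 256-Kbyte request are assumptions.

#include <stdio.h>        /* perror(), printf() */
#include <sys/socket.h>   /* setsockopt(), getsockopt() */

/* Enlarge the receive buffer on the receiver's socket so that bursts of
   packets are less likely to be dropped before the application reads them. */
void enlarge_recv_buffer( int sock )
{
    int bufsize = 256 * 1024;            /* request a 256-Kbyte buffer */
    socklen_t len = sizeof( bufsize );

    if( setsockopt( sock, SOL_SOCKET, SO_RCVBUF, &bufsize, len ) < 0 )
        perror( "setsockopt( SO_RCVBUF )" );

    /* read back the size the kernel actually granted */
    getsockopt( sock, SOL_SOCKET, SO_RCVBUF, &bufsize, &len );
    printf( "receive buffer is now %d bytes\n", bufsize );
}

If the MTU-9000 RTTs improve with a larger receive buffer, that would support the explanation above; if they do not, the cause likely lies elsewhere.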

 

5.2 TCP/IP

 

5.2.1 MTU 9000

 


 


Figures 5.2.1.1 and 5.2.1.2 show the results of the RTT, using the TCP protocol, as a function of packet size. Figures 5.2.1.1 and 5.2.1.2 are almost identical to figures 5.1.1.1 and 5.1.1.2. The only noticeable difference is the wild variance in the RTT just before a packet size of 63,000 bytes. One would have expected more of a jump at packet sizes that are multiples of 9000 bytes, such as the one at a packet size of 9000 bytes. However, these events either did not occur or were too minute to observe.

 

5.2.2 MTU 1500

 

Figure 5.2.2.1 shows the RTT versus packet size using the TCP protocol at an MTU of 1500 bytes. Again, as the packet size increases, the RTT to deliver a one-Mbyte message decreases. This run was retried several times from packet sizes of 1000 bytes up to 65549 bytes, and each time the program froze for some time. Our group's maximum threshold for waiting for the sender to be acknowledged by the receiver was no more than twenty minutes. Perhaps the TCP time-out mechanism was initiated and kept backing off, increasing its time-out period each time. So, this experiment was re-run from a packet size of 1000 bytes until a packet size of 3643 bytes, the last packet size recorded before the program hung. To collect as much data as possible we reduced the step size from ten bytes to one byte. A sketch of one way a future run could detect such a stall, rather than waiting indefinitely, is given below.
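This sketch polls the socket with select() before reading the acknowledgment, so a run can be abandoned after a bounded wait. POSIX-style sockets are assumed; the function name ack_is_ready and its timeout argument are assumptions.

#include <sys/select.h>   /* select(), fd_set, FD_ZERO, FD_SET */
#include <sys/time.h>     /* struct timeval */

/* Wait up to 'seconds' for the acknowledgment to become readable on a
   blocking TCP socket, so a stalled run can be detected and abandoned.
   Returns 1 if data is waiting, 0 on timeout, and -1 on error. */
int ack_is_ready( int sock, int seconds )
{
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO( &readfds );
    FD_SET( sock, &readfds );
    timeout.tv_sec  = seconds;
    timeout.tv_usec = 0;

    return select( sock + 1, &readfds, NULL, NULL, &timeout );
}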

In figure 5.2.2.1, one can see a slight jump before the packet size of 1500 bytes. This is indicative of breaking a packet into two frames. Another jump in the RTT delay was expected around a packet size of 3000 bytes, but there was nothing noticeable to report except for the cluster of RTTs.

Again, the wide range of values between packet sizes of 1000 and 1500 bytes and between packet sizes of 2300 and 2900 bytes may be the result of bulk data transfers from the receiver[ 9 ]. However, the variance in the RTTs between packet sizes of 1500 and of 2300 is much smaller.


 

 

 


5.2.3 Comparison( MTU 9000 & MTU 1500 )

 


Figure 5.2.3.1, like figure 5.1.3.1, shows the comparison of TCP RTTs between MTUs of 9000 and 1500 bytes; see section 5.1.3 for a computed example. There is really insufficient data here to draw a conclusion over a wide range of packet sizes. However, with the present data, it would seem that the RTTs are larger for a frame size of 9000 bytes. This may be due to the slow TCP timer within the receiver's kernel, as cited in section 5.1.3. In any case, one would have expected the ratio to always be below one.

 

 

6. Conclusions and Future Work

 

Naturally, one would think that sending more data without increasing the overhead would require less time and therefore yield higher throughput. However, the contrary was observed.

For both the TCP and UDP protocols, the frame size of 1500 bytes had smaller RTTs than the frame size of 9000 bytes. As cited in section 5.1.3, this may be due to the slow timer in the receiver's kernel. If the timer were fast enough to keep the receiver's buffer partially empty, there would always be enough room to receive more data.

Future work may be to re-run these experiments on a homogeneous network under light, moderate, and heavy traffic. One may also wish to fine-tune this experiment by diving into more of the underlying mechanisms and having the times for each mechanism sum to each reported RTT. This fine-tuning may help to explain the steps found in figures 5.1.2.2 and 5.2.2.1. Also, if the culprit is the timer within the kernel, tests could be run to find a more satisfactory time at which to relinquish the receiver's buffer.

 

 

Acknowledgments

 

I would like to thank Dr. Ken Christensen, Hiroshi Fujinoki, Nan Zhang, Sumit Birla, Amar Nayegandhi, and Aamir Shaikh of the University of South Florida for their helpful suggestions.

 

References

 

[ 1 ] Schroeder, Michael D. and Michael Burrows. "Performance of Firefly RPC." ACM Transactions on Computer Systems. 8 No.1( February 1990 ) : 1-17.

[ 2 ] Holleczek, P. and T. Baumgarten. "Throughput measurements in a 2 Mbps X.25 network." Computer Networks and ISDN Systems. 25( November 1992 ) : 351-356.

[ 3 ] Schildt, Herbert. Windows 98 Programming from the Ground Up. Berkeley, California : Osborne McGraw-Hill, 1998.

[ 4 ] Kay, Jonathan and Joseph Pasquale. "A Performance Analysis of TCP/IP and UDP/IP Networking Software for the DECstation 5000." IEEE Network. 7 No.4( July 1993 ) : 1-21.

[ 5 ] Dalton, Chris, et al. "Afterburner." IEEE Communications Magazine. 27 No.6( June 1989 ) : 36-43.

[ 6 ] Heatley, Sharon and Dan Stokesberry. "Analysis of Transport Measurements Over a Local Area Network." IEEE Communications Magazine. 27 No.6( June 1989 ) : 16-22.

[ 7 ] "Extended Frame Sizes for Next Generation Ethernets." Alteon Networks, The Server Switching Company. Accessed April 17, 2000. http://www.alteonwebsystems.com/products/white_papers/jumbo/index.shtml

[ 8 ] "Server Congestion and the Premise of Jumbo Frames." 3Com's Gigabit Ethernet White Paper: Gigabit Ethernet Comes of Ages. Accessed April 17, 2000. http://www.3com.com/technology/tech_net/tech_briefs/500902.html.

[ 9 ] Stevens, W. Richard. TCP/IP Illustrated, Volume 1, the Protocols. Reading, Massachusetts : Addison-Wesley Publishing Company, February, 1998.

[ 10 ] Stevens, W. Richard. Unix Network Programming, Volume 1: Networking APIs : Sockets and XTI. Upper Saddle River, NJ : Prentice-Hall, Inc., 1998.