On the Modeling and Analysis of the RTT

 

 

 


John Egan

Computer Science and Engineering

University of South Florida

Tampa, FL  33620

johne87@hotmail.com

 

Abstract

 

Several aspects of the Round Trip Time (RTT) are studied.  The areas of interest include the delay that packets incur over the course of a 24-hour period and how widely the RTT varies among different sites.  Related work involving the study of RTTs is also briefly examined.

 

 

1. Introduction

 

          The Round Trip Time, referred to from here on as the RTT, is the time it takes for a packet to leave its source, travel to its destination, and return to its source.  In this paper, I examine Internet traffic patterns over the course of an average workday by observing the RTTs of 1008-byte packets.  In particular, I examine the RTTs of packets traveling paths of varying hop counts and any packet losses that occur during these trips.

          The remainder of this paper is organized as follows.  Section 2 briefly reviews related work on RTTs.  Section 3 describes what motivated this study.  Section 4 describes the methods used to measure the RTT of packets.  Section 5 analyzes the measured RTTs.  Section 6 presents the conclusions drawn from the analysis in Section 5.

 

2. Background

 

          Researchers have studied the RTT in many kinds of networks in an effort to reduce delay.  I briefly mention a few of these studies here.

          For example, Charalambous studied the effects of high bandwidth-delay products and noise on satellite transmissions.  He investigated the throughput of these transmissions by varying parameters such as the RTT, the bit error rate (BER), and the TCP variant used[ 1 ].  He found that TCP SACK achieved better throughput than TCP Reno and TCP New Reno[ 1 ].  In practice, however, the difference between TCP New Reno and TCP SACK is minimal[ 1 ].

          Kung and Wang looked at ways to improve credit-based ATM flow control[ 2 ].  One aspect of their study examined the allocation of switch buffer space during request-response exchanges between clients and a server[ 2 ].  To get good performance at the ATM switch with respect to buffer size, they allocated the smaller of the bandwidth-delay product (RTT*bw) and the product of the number of clients and the reply size, that is, min( RTT*bw, #clients*replySize )[ 2 ].  Performance here means maximum throughput and minimal delay.
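
As a small illustration of this allocation rule, the Python sketch below computes both quantities and takes the minimum.  The link speed, client count, and reply size are hypothetical numbers chosen only for the example; they are not values from [ 2 ].

def atm_switch_buffer_cells(rtt_s, bandwidth_cells_per_s, num_clients, reply_size_cells):
    # Buffer allocation rule from [ 2 ]: take the smaller of the
    # bandwidth-delay product and the total outstanding reply traffic.
    bandwidth_delay_product = rtt_s * bandwidth_cells_per_s
    outstanding_replies = num_clients * reply_size_cells
    return min(bandwidth_delay_product, outstanding_replies)

# Hypothetical numbers: a 10 ms RTT on a 155 Mb/s ATM link
# (roughly 365,566 cells/s), with 50 clients each expecting a 100-cell reply.
print(atm_switch_buffer_cells(0.010, 365_566, 50, 100))   # min(3655.66, 5000) -> ~3656 cells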

          Carter and Crovella looked at ways to increase throughput when fetching data from replicated sites[ 3 ].  They considered replicated sites because such sites contain the same information and are distributed throughout the world[ 3 ].  They sought to improve upon static methods of choosing the best site, such as geographical location and hop count, by also considering the RTT and the bandwidth of the path[ 3 ].  They then proposed an algorithm based solely on the RTT to select the best site to communicate with[ 3 ].

          Henderson, Sahouria, McCanne, and Katz looked at two ways of promoting fairness toward connections with long RTTs[ 4 ].  They first investigated Floyd’s “Constant-Rate” (CR) window increase algorithm[ 4 ].  This algorithm works well only in a homogeneous network, that is, when all competing connections implement the same CR algorithm[ 4 ].  A second way of promoting fairness is to use an increase-by-K (IBK) policy instead of the standard increase-by-one policy[ 4 ].  Under IBK, the congestion window grows by K segments per round trip rather than one, which allows a long-RTT connection to send more data on each trip while remaining reasonably fair to competing traffic at each hop[ 4 ].
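
To make the difference concrete, the following Python sketch (my own illustration, not code from [ 4 ]) compares how a congestion window grows per round trip under the standard increase-by-one policy and under an increase-by-K policy.

def window_growth(initial_window, round_trips, k=1):
    # Congestion-avoidance growth: the window grows by k segments per
    # round trip (k=1 is the standard policy; k>1 is increase-by-K [ 4 ]).
    window = initial_window
    history = [window]
    for _ in range(round_trips):
        window += k
        history.append(window)
    return history

# A long-RTT connection completes fewer round trips in the same wall-clock
# time; increase-by-K lets its window (and thus its share) catch up faster.
print(window_growth(10, 5, k=1))   # [10, 11, 12, 13, 14, 15]
print(window_growth(10, 5, k=4))   # [10, 14, 18, 22, 26, 30]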

         

 

3. Problem

         

       My initial belief was that the time of day would be irrelevant to traffic on the web.  The Internet is a global community, governed by neither time nor place.  Yet during certain times of the day, downloads, uploads, and web surfing in general can take a monumental amount of time, especially over a 56k modem.  There also seem to be off-peak times when file transfers finish in a reasonable amount of time.  My working assumption is that the early morning hours are the best time for surfing or large file transfers, and that the evening hours are the worst.

 

 

 

4. Method

         

          In this section I go over the inputs and procedures used in this experiment.

          I wanted to hold as many parameters constant as possible for this experiment.  For instance, I chose sites only in the Eastern time zone so that the local time of day would be the same at every site.

 

The inputs include:

·         1008 byte packets

·         4 web sites:

www.maine.edu – University of Maine,

www.mit.edu – Massachusetts Institute of Technology,

www.gatech.edu – Georgia Institute of Technology, and

www.suny.edu – State University of New York.

         

The procedures include:

·         running the ‘ping’ command against each site ten times every hour for 24 hours, and

·         running the ‘traceroute’ command once to each site.

 

These two commands were run from europa.dyns.cx.  This machine had the ‘traceroute’ command available and a richer set of ping options.  The ‘traceroute’ command was used to collect the hop-count information.
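
A minimal Python sketch of this collection step follows.  It assumes a Unix-like host whose ping accepts the -c (count) and -s (payload size) options; the hourly scheduling used in the actual runs would in practice be handled by cron rather than by this script.

import subprocess

SITES = ["www.maine.edu", "www.mit.edu", "www.gatech.edu", "www.suny.edu"]

def ping_site(host, count=10, payload_bytes=1008):
    # Run ping and return its raw output, which contains the individual
    # RTTs and the packet-loss summary line.
    result = subprocess.run(["ping", "-c", str(count), "-s", str(payload_bytes), host],
                            capture_output=True, text=True)
    return result.stdout

def hop_count(host):
    # Run traceroute once; each numbered output line after the header is one hop.
    result = subprocess.run(["traceroute", host], capture_output=True, text=True)
    return len(result.stdout.strip().splitlines()) - 1

if __name__ == "__main__":
    for site in SITES:
        print(site, hop_count(site))
        print(ping_site(site))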

 

The output includes:

·         the number of hops to each university, and

·         960 (4 x 10 x 24) RTT measurements, one from each ping.

 

After acquiring the data, I plot three graphs.  The first is the RTT vs. the time of day.  The second is the standard deviation of the RTT vs. the time of day.  The third examines the percentage of packet loss vs. the time of day.
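
A Python sketch of this reduction is shown below, assuming the 960 RTT samples have been parsed into a mapping from (site, hour) to the list of ten RTTs in milliseconds; matplotlib is assumed for the plotting.

import statistics
import matplotlib.pyplot as plt

def hourly_series(samples, site):
    # samples: dict mapping (site, hour) -> list of RTTs in ms.
    hours = list(range(24))
    means = [statistics.mean(samples[(site, h)]) for h in hours]
    stdevs = [statistics.stdev(samples[(site, h)]) for h in hours]
    return hours, means, stdevs

def plot_rtt_vs_time(samples, sites):
    for site in sites:
        hours, means, _ = hourly_series(samples, site)
        plt.plot(hours, means, label=site)
    plt.xlabel("Time of day (hour)")
    plt.ylabel("Mean RTT (ms)")
    plt.legend()
    plt.show()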

 

 

 

 

5. Proof (by Experimental Results)

 

5.1 RTT vs. Time Of Day

 

          Figure 5.1 shows the RTT plotted against the time of day.

Figure 5.1 (RTT vs. time of day)

 

          In it, one can see that the best time to do any browsing is between 4 and 8 am.  The worst time is after 10 pm.  One can also see that the travel distance (measured by the number of hops) does not play a part in determining the RTT.  In fact, the site with the most hops (19, to www.gatech.edu) had the fastest response time, and the site with the fewest hops (14, to MIT) had the slowest.

          Other parameters that likely play a key role in the delay, but were not examined here, include the number of hits each site receives; the bandwidth of the media and the queueing scheme within each router can handle only a finite number of customers efficiently.  MIT, in particular, seems to receive many hits, whereas Georgia Tech appears to receive the fewest.

          Looking at this data, one might think that geography plays a role: MIT, the furthest away, has the longest RTT, and Georgia Tech, the closest, has the shortest.  However, the propagation time in the wires connecting the routers is negligible.

 

5.2 Standard Deviation of RTT/Hops vs. Time of Day

 

Figure 5.2 shows the standard deviation of the RTT among the four sites.  I have removed the path-length variable by dividing each RTT by the site's number of hops.

Since MIT has the largest RTT, I computed the standard deviation twice: once including MIT and once without it.  Without MIT, the per-hop RTTs of the sites stayed within roughly 5% of each other, which shows that the traffic patterns among these universities are very similar.  Including MIT, however, the deviation varied widely, especially during business hours.  This may be due to outside companies or other universities accessing data from MIT.  After the business day has ended (including on the West Coast), between 9 pm and 8 am, MIT's pattern resembles that of the other universities.
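
The per-hop normalization and the two deviation measures can be sketched in Python as follows; the samples mapping and hop counts are assumed to come from the collection step, and this is my own illustration of the calculation rather than the exact script used.

import statistics

def per_hop_rtts(samples, hops, site, hour):
    # Divide each RTT sample by the site's hop count to remove the
    # path-length variable.
    return [rtt / hops[site] for rtt in samples[(site, hour)]]

def hourly_deviation(samples, hops, sites, hour):
    # Standard deviation of the per-hop RTT across the given sites
    # for one hour of the day.
    values = []
    for site in sites:
        values.extend(per_hop_rtts(samples, hops, site, hour))
    return statistics.stdev(values)

# Deviation with and without MIT at, for example, 2 pm:
# all_sites = ["www.maine.edu", "www.mit.edu", "www.gatech.edu", "www.suny.edu"]
# hourly_deviation(samples, hops, all_sites, 14)
# hourly_deviation(samples, hops, [s for s in all_sites if s != "www.mit.edu"], 14)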

 


Figure 5.2 (standard deviation vs. time of day)

 

 

 


5.3 Packet Loss

 

Figure 5.3 shows the percentage of packet loss vs. the time of day.  By comparing Fig. 5.1 with Fig. 5.3, one can see that as the RTT increases, the packet loss also increases.

Fig. 5.3 (% packet loss vs. time of day)

The spike at the University of Maine at around 3 pm in Fig. 5.3 illustrates one of the dangers of collecting only a handful of data points; in this case, only 10 data points were collected for that time and site.

 

 

6. Conclusions and Future Work

         

This paper examined the RTT of 1008-byte packets pinged to four universities.  I further looked at the time of day these measurements took place as well as the corresponding packet losses.  My initial view that the time of day did NOT play a factor was wrong, as my experiments have shown.  Not everyone in the world has a way to connect to the Internet comparable to that of the inundated East Coast.

Another result of this experiment is that a high RTT is accompanied by a higher percentage of packet loss.

Lastly, regardless of geographical distance or how many hops away a particular site may be, the time of day appears to be the determining factor in Internet traffic.

Future work may include re-running this experiment with a larger data set.  This way, unexpected spikes such as the one in Fig. 5.3 may be smoothed out.

 

Acknowledgments

 

I would like to thank Dr. Ken Christensen, Hiroshi Fujinoki, Nan Zhang, Sumit Birla, Raji Ramachandran, and Zornitza Geneva of the University of South Florida for their helpful suggestions.

 

References

 

[ 1 ]   Charalambous, Charalambos P., and Victor S. Frost.  “Performance of TCP Extensions on Noisy High BDP Networks.”  IEEE Communications Letters 3.10 (October 1999): 294-296.

[ 2 ]   Kung, H. T., and S. Y. Wang.  “Client-Server Performance on Flow-Controlled ATM Networks: A Web Database of Simulation Results.”  IEEE INFOCOM 3 (1997): 1218-1226.

[ 3 ]   Carter, Robert L., and Mark E. Crovella.  “Server Selection Using Dynamic Path Characterization in Wide-Area Networks.”  IEEE INFOCOM 3 (1997): 1014-1021.

[ 4 ]   Henderson, Thomas R., et al.  “On Improving the Fairness of TCP Congestion Avoidance.”  IEEE GLOBECOM 1 (1998): 539-544.