End-to-end delay is a very important QoS parameter. Several factors contribute to the delay experienced by a packet as it traverses a network: forwarding delay, queuing delay, propagation delay and serialization delay. When scheduling algorithms are discussed, only the queuing delay is of interest. It denotes the time a packet has to wait in a queue, while the system performs statistical multiplexing and other packets are serviced, before it can be transmitted on the output port. Jitter is the variation in delay over time experienced by consecutive packets that belong to the same flow.
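The four delay components named above can be summed per hop; a minimal illustrative sketch (the function name and default propagation speed are our own choices, not from any standard API) is:

```python
def per_hop_delay(packet_bits, link_rate_bps, distance_m,
                  propagation_speed=2e8, forwarding=0.0, queueing=0.0):
    """Illustrative sum of the four per-hop delay components (seconds).

    Serialization delay = packet size / link rate; propagation delay =
    link length / signal propagation speed (~2e8 m/s in fiber is assumed).
    Forwarding and queuing delays are passed in, since the latter depends
    on the scheduler and the instantaneous backlog.
    """
    serialization = packet_bits / link_rate_bps
    propagation = distance_m / propagation_speed
    return forwarding + queueing + serialization + propagation
```

For example, a 1500-byte (12000-bit) packet on a 1 Mbit/s link of 200 km contributes 12 ms of serialization delay and 1 ms of propagation delay; only the queuing term is under the scheduler's control.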
Given multiple packets awaiting transmission through an output link of a network device, it is the function of a scheduler and its algorithm to determine the exact order in which these packets are transmitted. Congestion occurs when packets arrive at the input ports faster than they can be transmitted on the output ports. The delay introduced by congestion can range from the time it takes to transmit the last bit of the previously arriving packet up to infinity, when the packet is dropped due to buffer exhaustion. A scheduler should have a bounded maximum delay (latency), low computational cost (complexity) and an easy implementation; it must also be efficient and fair.
Since delay is a very important transmission parameter, a scheduling algorithm should provide end-to-end delay guarantees for individual flows without severely under-utilizing network resources. One component of the end-to-end delay is the latency of the observed scheduler, and we study it closely in this paper. The notion of latency used here is based on the length of time it takes a new flow to begin receiving service at its reserved rate. Latency is therefore directly relevant to the size of the playback buffers required by real-time applications.
Sorted-priority schedulers differ in the manner in which they calculate the global virtual-time function. They generally offer good fairness and a low latency bound, but their high computational complexity makes them inefficient to implement. In frame-based schedulers, on the other hand, time is split into frames of fixed or variable length. Reservations are made in terms of the maximum amount of traffic a flow is allowed to transmit during a frame period. For some of these schedulers the frame size can vary as well, so that the server does not stay idle when flows transmit less traffic than their reservations over the duration of the frame. In frame-based schedulers the scheduler simply visits all non-empty queues in round-robin order. The service received by a flow in one round-robin opportunity is proportional to its fair share of the bandwidth. These schedulers do not have to sort packets or maintain a global virtual-time function, so they have lower computational complexity than sorted-priority schedulers. Deficit Round Robin (DRR), Surplus Deficit Round Robin, Elastic Round Robin and Nested Round Robin are frame-based schedulers with complexity O(1), but they have worse fairness and latency properties than sorted-priority schedulers.
In 1996 Shreedhar and Varghese [4] proposed one of the most popular frame-based scheduling algorithms. The main characteristic of all Deficit Round Robin (DRR)-like scheduling algorithms is their ability to provide guaranteed service rates for each flow (queue). DRR services flows in strict round-robin order. It has complexity O(1) and is easy to implement, and its latency is comparable to that of other frame-based schedulers. A detailed description of the operation of the DRR algorithm can be found in [4].
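To make the round-robin mechanism concrete, the following is a minimal sketch of DRR's core loop as described in [4]: each backlogged flow accumulates a per-round quantum in a deficit counter, sends head-of-line packets while the counter covers them, and forfeits any leftover deficit when its queue empties. The function name and data layout are our own simplifications, not the authors' implementation.

```python
from collections import deque

def drr_schedule(flows, quanta):
    """Sketch of Deficit Round Robin (Shreedhar & Varghese).

    flows:  list of deques of packet sizes (bytes), one deque per flow
    quanta: per-flow quantum added to the deficit counter each round
    Returns the transmission order as (flow_index, packet_size) pairs.
    """
    deficit = [0] * len(flows)
    order = []
    active = deque(i for i, f in enumerate(flows) if f)  # backlogged flows
    while active:
        i = active.popleft()
        deficit[i] += quanta[i]
        # Serve head-of-line packets while the deficit counter covers them.
        while flows[i] and flows[i][0] <= deficit[i]:
            size = flows[i].popleft()
            deficit[i] -= size
            order.append((i, size))
        if flows[i]:
            active.append(i)   # still backlogged: rejoin the round
        else:
            deficit[i] = 0     # idle flows forfeit their unused deficit
    return order
```

With flows of packet sizes [500, 500] and [300] and equal quanta of 500, the first flow sends one packet per round while the second finishes in its first opportunity, which is the O(1)-per-packet behavior the text describes.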
LATENCY
Stiliadis and Varma [5] defined a general class of schedulers called Latency-Rate (LR) servers. The behavior of an LR server is determined by two parameters: the latency and the allocated rate. Intuitively, the latency of an LR server is the worst-case delay seen by the first packet of a busy flow, that is, the packet arriving when the flow's queue is empty.
The latency of a particular scheduling policy may depend on its internal parameters: the transmission rate of the outgoing link, the number of flows sharing the link and their allocated rates.
This definition of LR servers makes no assumption on whether the server is based on a fluid model or a packet-by-packet model. The only requirement is that a packet is not considered to have departed the server until its last bit has departed; packet departures must therefore be treated as impulses. The DRR algorithm satisfies all of these assumptions.
The authors also developed the notion of the latency of a scheduling algorithm and determined an upper bound on the latency for a number of schedulers that belong to the class of LR servers. This notion of latency is based on the length of time it takes a new flow to begin receiving service at its guaranteed rate.
Using the general idea of Stiliadis and Varma [5], we derive an upper latency bound for the DRR algorithm that differs from theirs. It also differs from the bounds derived in [3]. We show that our upper bound is mathematically correct, in contrast to the ones derived in [5] and [3]. A more detailed analysis is given in [1] and [2].
Let us first define active and busy periods of a flow.
Definition 1 An active period of a flow is the maximal interval of time during which the flow has at least one packet awaiting service or in service.
Definition 2 A busy period of a flow is the maximal time interval during which the flow would be active if served exactly at its reserved rate.
The active period reflects the actual behavior of the scheduler, since the service offered to a flow varies with the number of active flows. The busy period is a mathematical construction that tells us how long a flow would remain active if served at exactly its reserved rate.
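The busy period of Definition 2 can be illustrated with a short sketch (our own construction, not from [5]): given the arrival times and sizes of the packets of one busy period, it computes when the flow would go idle if drained at exactly its reserved rate.

```python
def busy_period_end(arrivals, rho):
    """When would the flow go idle if served at exactly rate rho (bits/s)?

    arrivals: list of (arrival_time_s, size_bits), sorted by time, assumed
    to all fall within a single busy period (Definition 2). Illustrative
    only: if a gap lets the fluid backlog empty, a new busy period starts
    and this function should be applied to each busy period separately.
    """
    t_end = arrivals[0][0]
    for at, bits in arrivals:
        # Service of each packet starts no earlier than its arrival and no
        # earlier than the end of service of the preceding backlog.
        t_end = max(t_end, at) + bits / rho
    return t_end
```

For instance, two 1000-bit packets arriving at t = 0 s and t = 1 s with a reserved rate of 1000 bit/s keep the flow busy until t = 2 s, even though the scheduler's actual active period may be longer or shorter depending on how many other flows are active.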
CONCLUSION
Stiliadis and Varma [5] defined a general class of schedulers called Latency-Rate (LR) servers. Using the general ideas of Stiliadis and Varma [5] and of Shreedhar and Varghese [4], we have derived an upper latency bound for the DRR scheduling algorithm. We have shown that our upper bound is unique regardless of the approach used, having obtained the same result by two different approaches (see [2] for details). It should be mentioned that, even though we used the same ideas as Stiliadis and Varma [5] and Shreedhar and Varghese [4], we made some changes in the derivation, leading to a new and mathematically correct latency bound.