Wednesday, September 4, 2019
Analysis of QoS Parameters
Chapter 3: Analysis of QoS Parameters

3.1 Introduction

A number of QoS parameters [11] can be measured and monitored to determine whether the service level offered or received is being achieved. These parameters include the following:

1. Network availability
2. Bandwidth
3. Delay
4. Jitter
5. Loss

3.1.1 Network Availability

Network availability has a fundamental effect on QoS. Simply put, if the network is unavailable, even for short periods of time, the user or application may experience unpredictable or undesirable performance [11]. Network availability is the combined availability of the many items used to build a network, including redundant network devices (e.g. redundant interfaces, processor cards or power supplies in routers and switches), resilient networking protocols, multiple physical connections (fiber or copper), backup power sources, and so on. Network operators can increase their network's availability by implementing varying degrees of each item.

3.1.2 Bandwidth

Bandwidth is one of the most important QoS parameters. It can be divided into two types:

1. Guaranteed bandwidth
2. Available bandwidth

3.1.2.1 Guaranteed bandwidth

Network operators can offer a service that specifies a minimum bandwidth (BW) and a burst bandwidth in the SLA. Because the bandwidth is guaranteed, this service costs more than an available-bandwidth service, so the provider must give preferential treatment to subscribers of the guaranteed service. In some cases the operator separates these subscribers onto different physical or logical networks, e.g. VLANs or virtual circuits. In other cases guaranteed-BW traffic shares the same infrastructure with available-BW traffic; this is common where network connections are expensive or bandwidth is leased from another service provider. When subscribers share the same infrastructure, guaranteed-BW traffic must receive priority over available-BW traffic so that the guaranteed subscribers' SLAs are still met during congestion. Burst bandwidth can be specified in terms of the amount and duration of excess bandwidth above the guaranteed minimum, and QoS mechanisms may be activated to police or discard traffic that consistently exceeds the minimum bandwidth the subscriber agreed to in the SLA.
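One common way to express a guaranteed minimum rate with a bounded burst, and to police traffic against it, is a token bucket. The sketch below is illustrative only; the class and parameter names (rate_bps, burst_bytes) are assumptions chosen for the example, not values taken from the text or from any particular vendor's implementation.

```python
import time

class TokenBucket:
    """Polices traffic against a guaranteed rate plus a bounded burst."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # token refill rate, bytes per second
        self.capacity = burst_bytes      # largest burst the bucket can hold
        self.tokens = burst_bytes        # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, packet_len):
        """Return True if the packet is within profile, False if it exceeds
        the guaranteed-plus-burst bandwidth and should be dropped or marked."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False

# Example: a 384-kbps guaranteed rate with an 8-kB burst allowance.
bucket = TokenBucket(rate_bps=384_000, burst_bytes=8_192)
print(bucket.conforms(1500))   # True while the burst allowance lasts
```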
3.1.2.2 Available bandwidth

Network operators have a fixed amount of bandwidth, but to obtain a better return on their infrastructure investment they oversubscribe it. The bandwidth a user subscribes to is therefore not always available, and users compete for whatever is free; each user gets more or less bandwidth depending on how much traffic other users are generating at any given time. Available bandwidth is the approach commonly used on consumer ADSL networks: for example, a customer signs up for a 384-kbps service whose SLA offers no bandwidth guarantee. The SLA states 384 kbps as the nominal rate but makes no promises. Under light load the full 384 kbps is available to the user, but under heavy load it is not consistently available, which is noticeable at times of day when many users access the network.

3.1.3 Delay

Network delay is the transit time an application experiences from the ingress (entry) point to the egress (exit) point of the network. Delay can cause significant QoS problems for applications such as video conferencing and fax transmission, which simply time out and fail under excessive delay. Some applications can compensate for small amounts of delay, but once a certain threshold is exceeded the QoS becomes compromised. For example, some networking equipment can spoof an SNA session on a host by providing local acknowledgements when the network delay would otherwise cause the session to time out. Similarly, VoIP gateways and phones provide local buffering to compensate for network delay. Delays can be both fixed and variable.

Examples of fixed delays are:

• Application-based delay, e.g. voice codec processing time and IP packet creation time in the TCP/IP software stack
• Data transmission (serialization) delay over the physical network media at each network hop
• Propagation delay across the network, which depends on the transmission distance

Examples of variable delays are:

• Ingress queuing delay for traffic entering a network node
• Contention with other traffic at each network node
• Egress queuing delay for traffic exiting a network node

3.1.4 Jitter (Delay Variation)

Jitter is the difference in delay experienced by different packets that are part of the same traffic flow. High-frequency delay variation is known as jitter, while low-frequency delay variation is known as wander. The primary cause of jitter is the difference in queue wait times for consecutive packets in a flow, and it is one of the most significant issues for QoS. Real-time traffic such as video conferencing cannot tolerate much jitter: differences in packet arrival times cause breaks in the reconstructed voice or video. All transport systems exhibit some jitter; as long as it stays below the defined tolerance level, it does not affect service quality.

3.1.5 Loss

Loss, whether bit errors or packet drops, has a much greater impact on VoIP services than on data services. During voice transmission, the loss of multiple packets may cause an audible pop that is irritating to the user. In data transmission, by contrast, the loss of a single bit or of multiple packets is recovered and is almost never noticed by users. In real-time video conferencing, consecutive packet loss may cause a momentary glitch on the screen, after which the video proceeds as before; however, as packet drops increase, the quality of the transmission degrades. The packet loss rate must be less than 5% for minimum quality and less than 1% for toll quality. Loss occurs when congested network nodes drop packets. TCP (Transmission Control Protocol) is one networking protocol that protects against packet loss by retransmitting packets dropped by the network. As congestion increases, more packets are dropped and therefore more TCP retransmissions occur. If congestion continues, network performance degrades further because much of the bandwidth is consumed by retransmissions of dropped packets. TCP eventually reduces its transmission window size; with a smaller window less data is in flight, which relieves congestion and results in fewer packets being dropped. Because congestion has a direct influence on packet loss, congestion avoidance mechanisms are often deployed. One such mechanism is Random Early Discard (RED). RED algorithms randomly and intentionally drop packets once the traffic reaches one or more configured thresholds, which provides more efficient congestion management for TCP-based flows.
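To make the idea behind RED concrete, the sketch below computes a drop probability that rises linearly between a minimum and a maximum average-queue-length threshold. The threshold and probability values are assumptions chosen for the example; real deployments tune them per interface.

```python
import random

# Illustrative RED-style early-drop decision based on average queue depth.
MIN_TH, MAX_TH, MAX_P = 20, 80, 0.10   # packets, packets, max drop probability

def red_should_drop(avg_queue_len):
    """Return True if an arriving packet should be dropped early."""
    if avg_queue_len < MIN_TH:
        return False                    # queue is short: never drop
    if avg_queue_len >= MAX_TH:
        return True                     # queue is long: always drop
    # Between the thresholds the drop probability grows linearly up to MAX_P.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

# Example: the chance of an early drop grows as the average queue builds up.
for depth in (10, 30, 60, 90):
    print(depth, red_should_drop(depth))
```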
3.1.5.1 Emission priorities

Emission priorities determine the order in which traffic is transmitted as it exits a network node. Traffic with a higher emission priority is transmitted ahead of traffic with a lower emission priority. Emission priorities also determine how much latency the node's queuing mechanism adds to the traffic. For example, email, a delay-tolerant application, receives a lower emission priority than delay-sensitive real-time applications such as voice or video: the delay-sensitive traffic must be transmitted promptly and cannot be buffered for long, while the delay-tolerant traffic may be buffered. In its simplest form, an emission priority scheme is a strict transmit priority scheme in which higher-priority traffic is always sent ahead of lower-priority traffic. This is typically implemented with strict priority scheduling (queuing); its downside is that low-priority queues may never be served (they are starved) if there is always higher-priority traffic and no bandwidth rate limiting. A more refined scheme uses weighted scheduling to improve fairness, so that lower-emission-priority traffic is still transmitted a share of the time. Finally, some schemes combine priority and weighted schedulers.

3.1.5.2 Discard priorities

Discard priorities determine the order in which traffic is discarded. Packets may be dropped because of network congestion, i.e. when the traffic exceeds its prescribed amount of bandwidth for some period of time. Under congestion, traffic with a higher discard priority is dropped before traffic with a lower discard priority. Traffic with similar QoS requirements can be subdivided using discard priorities: all of it receives the same performance when the network node is not congested, but when the node becomes congested the discard priority is used to drop the most expendable traffic first. Discard priorities also allow traffic with the same emission priority to be discarded when it is out of profile. Without discard priorities, traffic would have to be separated into different queues to provide service differentiation, which is expensive since networking devices typically have only a limited number of hardware queues (often eight or fewer). Some devices offer software-based queues, but using many of them typically reduces node performance. With discard priorities, traffic can be placed in the same queue, which is in effect subdivided into virtual queues, each with a different discard priority. For example, if a product supports three discard priorities, then a single hardware queue effectively provides three QoS levels.
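The interaction of emission and discard priorities can be illustrated with a small sketch (assumed names and queue sizes, not taken from the text): each queue has an emission priority and is served in strict priority order, while within a queue a per-packet discard priority decides what is dropped first when the queue fills.

```python
from collections import deque

class PriorityQueueingNode:
    """Illustrative node with strict-priority emission scheduling and
    per-packet discard priorities. Queue 0 has the highest emission priority;
    within a queue, packets with a higher discard priority are dropped first
    when the queue is full."""

    def __init__(self, num_queues=3, queue_limit=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.queue_limit = queue_limit

    def enqueue(self, packet, emission_priority, discard_priority):
        q = self.queues[emission_priority]
        if len(q) >= self.queue_limit:
            # Queue full: discard the most expendable packet, which is either
            # the queued packet with the highest discard priority or the
            # arriving packet itself.
            worst = max(range(len(q)), key=lambda i: q[i][1])
            if q[worst][1] > discard_priority:
                del q[worst]
            else:
                return False            # arriving packet is discarded
        q.append((packet, discard_priority))
        return True

    def dequeue(self):
        """Strict priority: serve the highest-priority non-empty queue."""
        for q in self.queues:
            if q:
                return q.popleft()[0]
        return None

# Example: voice in queue 0 is always emitted before email in queue 2.
node = PriorityQueueingNode()
node.enqueue("email-1", emission_priority=2, discard_priority=2)
node.enqueue("voice-1", emission_priority=0, discard_priority=0)
print(node.dequeue())   # "voice-1"
```

A weighted scheduler would replace the strict loop in dequeue() with a round of visits proportional to each queue's weight, which avoids starving the lower-priority queues.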
Table 3.1 illustrates the QoS performance dimensions required by some common applications.

Application                  Bandwidth   Sensitivity to Delay   Sensitivity to Jitter   Sensitivity to Loss
VoIP                         Low         High                   High                    Medium
Video Conferencing           High        High                   High                    Medium
Streaming Video on Demand    High        Medium                 Medium                  Medium
Streaming Audio              Low         Medium                 Medium                  Medium
Client/Server Transaction    Medium      Medium                 Low                     High
Email                        Low         Low                    Low                     High
File Transfer                Medium      Low                    Low                     High

Table 3.1: Application performance dimensions

Applications can have very different QoS requirements. When they are mixed over a common IP transport network without QoS, the traffic experiences unpredictable behavior.

3.2 Categorizing Applications

Networked applications can be categorized based on end-user requirements. Some applications run between people, others between a person and a networked device (e.g. a PC and a web server), and some between networking devices themselves (e.g. router to router). Table 3.2 groups applications into four traffic categories:

1. Network Control
2. Responsive
3. Interactive
4. Timely

Traffic Category    Example Applications
Network Control     Critical alarms, routing, billing, etc.
Responsive          Streaming audio/video, client/server transactions
Interactive         VoIP, interactive gaming, video conferencing
Timely              Email, non-critical applications

Table 3.2: Application categorization

3.2.1 Network Control Applications

Some applications are used to control the operation and administration of the network. They include network routing protocols, billing applications, and QoS monitoring and measurement for SLAs. These applications can be subdivided into those required under critical and under standard network operating conditions. To build highly available networks, network control applications require priority over end-user applications, because if the network is not operating properly, end-user application performance will suffer.

3.2.2 Responsive Applications

Some applications between a person and a networked device need to be responsive: a quick response must come back to the sender once a request is sent to the networking device. These applications are sometimes described as near real time. They require relatively low packet delay, jitter and loss, although their QoS requirements are not as stringent as those of real-time, interactive applications. This category includes streaming media and client/server web-based applications. Streaming media applications include Internet radio and audio/video broadcasts (news, training, education and motion pictures). Streaming applications such as video require the network to be responsive when they are started, so the user does not wait long before the media begins playing, and also for certain types of signaling: with movies on demand, for example, when a user changes channels, fast-forwards, rewinds or pauses the media, the user expects the application to react about as quickly as a remote control would. Client/server web applications typically involve the user selecting a hyperlink to jump from one page to another or to submit a request; they also require the network to be responsive so that a reply comes back quickly once the hyperlink is selected. This is easier to achieve over a best-effort network with a broadband Internet connection than with dial-up. Financial transactions also fall into this category, e.g. placing a credit card order and quickly giving the user feedback on whether the transaction completed. Without prompt feedback the user may be unsure whether to initiate a duplicate order, or may assume the order was placed correctly when it was not; in either case the user will not be satisfied with the performance of the network or the application.
Responsive applications may use either UDP- or TCP-based transport. Streaming media applications typically use UDP, because retransmitting late data would be of little use. Web-based applications are built on the Hypertext Transfer Protocol and use TCP, so packet loss is handled by TCP, which retransmits lost packets. For streaming media, a retransmitted packet is useful only if the stream is sufficiently buffered; otherwise the lost packets are simply discarded, which appears as distortion in the media.

3.2.3 Interactive Applications

Some applications are interactive: two or more people communicate or participate actively, and the participants expect a real-time response from the networked application. In this context, real time means there is minimal delay (latency) and delay variation (jitter) between sender and receiver. Some interactive applications, such as the telephone call, have operated in real time over the telephone companies' circuit-switched networks for over 100 years; the QoS expectations for voice have therefore already been set and must also be achieved for packetized voice such as VoIP. Other interactive applications include video conferencing and interactive gaming. Because interactive applications operate in real time, packet loss must be minimized. They are typically based on UDP (User Datagram Protocol) and therefore cannot retransmit lost or dropped packets as TCP-based applications do. Retransmission would not help in any case, because these applications are time-based: if a voice packet is lost, it makes no sense to retransmit it, since the conversation between sender and receiver has already progressed and the lost packet belongs to a part of the conversation that has already passed.

3.2.4 Timely Applications

Some applications do not require real-time performance between a person and a networked device but do require the information to be delivered in a timely manner; examples include store-and-forward email and file transfer. The relative importance of these applications is based on their business priority. They require that packets arrive within a bounded amount of delay. For example, an email that takes a few minutes to arrive at its destination is usually acceptable, but in a business environment an email that takes 10 minutes to arrive will often not be acceptable. The same bounded delay applies to file transfer: once a transfer is initiated, delay and jitter matter little, because file transfers often take minutes to complete. Timely applications use TCP rather than UDP transport, so packet loss is managed by TCP, which retransmits any lost packets, resulting in effectively no loss. In summary, timely applications expect the network to deliver packets within a bounded amount of delay and no more; jitter has a negligible effect on them, and loss is reduced to zero by TCP's retransmission mechanism.

3.3 QoS Management Architecture

The QoS management architecture for VoIP can be divided into two planes: the data plane and the control plane. The data plane mechanisms include packet classification, shaping, policing, buffer management, scheduling, loss recovery and error concealment.
These mechanisms implement the actions the network must take on user packets in order to enforce the different classes of service. The control plane mechanisms include resource provisioning, traffic engineering, admission control, resource reservation and connection management.

3.3.1 Data Plane

3.3.1.1 Packet Forwarding

Packet forwarding involves a classifier, a meter, a marker and a shaper/dropper. When a packet is received, the packet classifier determines which flow or class it belongs to; packets belonging to the same flow or class obey a predefined rule and are processed in the same manner. For VoIP applications the basic classification criteria can be IP address, TCP/UDP port, IP precedence, protocol, input port, DiffServ code point (DSCP) or Ethernet 802.1p class of service (CoS); Cisco supports several additional criteria such as access lists and traffic profiles. The meter decides whether the packet is within the traffic profile. The shaper/dropper delays or drops packets that exceed the traffic profile in order to bring the flow into conformance with the current network load. The marker sets a field in the packet, such as the DS field, to label the packet type for differentiated treatment later. After the traffic conditioner, a buffer stores packets waiting for transmission.
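A minimal classifier/marker for this forwarding stage might look like the sketch below. The match criteria (a UDP port range for RTP voice) and the DSCP values chosen are example assumptions, not settings prescribed by the text.

```python
# Illustrative packet classifier/marker in the spirit of the classifier,
# meter and marker described above.
DSCP_EF = 46    # Expedited Forwarding, commonly used for VoIP media
DSCP_BE = 0     # default / best effort

def classify_and_mark(packet):
    """Return the DSCP value to write into the packet's DS field."""
    is_udp = packet.get("protocol") == "udp"
    port = packet.get("dst_port", 0)
    if is_udp and 16384 <= port <= 32767:   # example RTP voice port range
        return DSCP_EF
    return DSCP_BE

voice_pkt = {"protocol": "udp", "dst_port": 16500}
web_pkt = {"protocol": "tcp", "dst_port": 80}
print(classify_and_mark(voice_pkt), classify_and_mark(web_pkt))   # 46 0
```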
3.3.1.2 Buffer Management and Scheduling

Active queue management such as RED drops packets before the queue is completely full, which helps avoid unfair resource usage. Predictable queuing delay and bandwidth sharing can be achieved by placing flows into separate queues and scheduling them individually, but per-flow schedulers do not scale, because their overhead grows with the number of active flows. The alternative is class-based schedulers, such as class-based WFQ and static priority, which schedule traffic on a per-class basis. The drawback is that predictable delay and bandwidth sharing are then harder to guarantee for an individual flow, so care must be taken when applying such schedulers to voice, which has strict delay requirements.

3.3.1.3 Loss Recovery

Loss recovery can be classified into two approaches: active recovery, which uses retransmission, and passive recovery, which uses forward error correction (adding redundancy). Retransmission may not be suitable for VoIP because it increases packet latency.

3.3.2 Control Plane

3.3.2.1 Resource Provisioning

Resource provisioning refers to the configuration of resources for applications in the network. In industry the main approach is over-provisioning, i.e. providing resources abundantly. Two factors make this attractive: the cost of bandwidth in the backbone is decreasing, and network planning becomes simpler.

3.3.2.2 Traffic Engineering

Traffic engineering focuses on keeping the network under control, that is, minimizing over-utilization of one part of the network while capacity is available elsewhere. Two methods that provide powerful tools for traffic engineering are Multi-Protocol Label Switching (MPLS) and Constraint-Based Routing (CBR). Through these mechanisms a certain amount of network resources can be reserved for the expected voice traffic along paths determined by constraint-based routing or other shortest-path routing algorithms.

3.3.2.3 Admission Control

Admission control limits the resource usage of voice traffic to the amount of specified resources. Plain IP networks have no admission control and can therefore offer only best-effort service. Parameter-based admission control provides delay-guaranteed service to applications whose traffic can be accurately described, such as VoIP. For bursty traffic it is difficult to describe the traffic characteristics, which forces this approach to overbook network resources and therefore lowers network utilization. To limit the amount of traffic over any period it uses explicit traffic descriptors; the typical example is the token bucket. Algorithms used in parameter-based admission control include:

• Resource-reservation based, e.g. Cisco's RSVP-based admission control
• Utilization based: the utilization measured at run time is compared with a threshold to decide whether to admit or reject
• Per-flow end-to-end guaranteed delay service: the bandwidth requirement is computed and compared with the available resources to make the decision
• Class-based admission control

3.4 Performance Evaluation in VoIP Applications

3.4.1 End-to-End Delay

When end-to-end delay exceeds a certain value, interactivity degrades and the conversation becomes more like half-duplex communication. Two types of delay contribute: 1) delay due to the processing and transmission of speech, and 2) network delay. Network delay consists of a fixed part and a variable part. The fixed part depends on the performance of the network nodes on the transmission path, the transmission and propagation delay, and the capacity of the links between the nodes; the variable part is the time spent in queues, which depends on the network load. Queuing delay can be minimized with advanced scheduling mechanisms such as priority queuing, and IP packet delay can be reduced by sending shorter packets instead of longer ones. A useful technique for reducing voice delay on WAN links is link fragmentation and interleaving: large data packets are fragmented into smaller pieces, and voice packets are interleaved between those fragments.

3.4.2 Delay Jitter

Delay variation, also known as jitter, hinders the proper reconstruction of voice packets in their original sequential form. It is defined as the difference in total end-to-end delay of two consecutive packets in a flow. Removing jitter requires collecting and holding packets long enough for the slowest packets to arrive so they can be played in the correct sequence. The usual solution is to employ a playout buffer at the receiver that absorbs the jitter before the audio stream is output: packets are buffered until their scheduled playout time arrives. Scheduling a later playout deadline increases the number of packets that arrive in time and therefore lowers the loss rate, but at the cost of higher buffering delay. Techniques for jitter absorption include:

• Setting the same playout time for all packets for the entire session
• Adaptively adjusting the playout time during silence periods according to current network conditions
• Continuously adapting the playout time for each packet, which requires scaling the voice packets to maintain continuous playout
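The receiver-side playout buffer described above can be sketched as follows. This is a deliberately simplified, fixed-delay version; the 60 ms playout delay, the field names and the assumption that sender timestamps and the receiver clock share a time base are all illustrative choices, whereas adaptive schemes adjust the delay at run time.

```python
import heapq

class PlayoutBuffer:
    """Illustrative playout buffer: packets are held until their playout
    deadline (sender timestamp + a fixed playout delay) so that late packets
    can still be played in the correct order."""

    def __init__(self, playout_delay_ms=60):
        self.playout_delay = playout_delay_ms
        self.heap = []                  # (timestamp_ms, payload), ordered by timestamp

    def insert(self, timestamp_ms, payload):
        heapq.heappush(self.heap, (timestamp_ms, payload))

    def pop_due(self, now_ms):
        """Return the payloads whose playout deadline has been reached."""
        due = []
        while self.heap and self.heap[0][0] + self.playout_delay <= now_ms:
            due.append(heapq.heappop(self.heap)[1])
        return due

buf = PlayoutBuffer(playout_delay_ms=60)
buf.insert(0, "frame-0")
buf.insert(20, "frame-1")
print(buf.pop_due(now_ms=70))   # ['frame-0']; frame-1 is still being held
```

A larger playout delay lets slower packets make their deadline (fewer late losses) at the price of more end-to-end latency, which is exactly the trade-off described above.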
3.4.3 Frame Erasure (FE)

Frame erasure occurs when the IP packet carrying a speech frame does not arrive at the receiver in time. A single frame or a block of frames may be lost. Techniques used to counter frame erasure include:

• Forward error correction, which requires additional processing and whose effectiveness depends on the rate and distribution of the losses
• Loss concealment, which replaces lost frames by replaying the last successfully received frame and is effective only at low loss rates and for single lost frames

High frame-erasure rates combined with high delay are troublesome because they lead to longer periods of corrupted voice. The speech quality perceived by the listener depends on the frame-erasure level at the exit of the jitter buffer, after forward error correction has been applied. To reduce frame loss, the Assured Forwarding service helps lower the network packet loss that occurs when queues in network nodes fill up.

3.4.4 Out-of-Order Packet Delivery

Out-of-order delivery occurs in complex topologies where several paths exist between the sender and the receiver. The receiving system must rearrange the received packets into the correct order to reconstruct the original speech signal. This is also handled by the jitter buffer, whose functionality then becomes:

• Re-ordering out-of-order packets (based on sequence numbers)
• Eliminating jitter
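The sketch below combines the two receiver-side techniques just described: it releases frames strictly in sequence-number order and conceals a missing frame by replaying the last good one. The class and field names are assumptions made for the example, not part of the text.

```python
class ReorderingJitterBuffer:
    """Illustrative jitter buffer that re-orders frames by sequence number
    and conceals an erased frame by repeating the previous frame."""

    def __init__(self):
        self.pending = {}            # sequence number -> frame payload
        self.next_seq = 0
        self.last_good = b"\x00"     # silence until a real frame arrives

    def receive(self, seq, frame):
        self.pending[seq] = frame

    def next_frame(self):
        """Return the next frame to play; if it never arrived, replay the
        previous frame (simple loss concealment)."""
        frame = self.pending.pop(self.next_seq, None)
        if frame is None:
            frame = self.last_good   # frame erasure: conceal it
        else:
            self.last_good = frame
        self.next_seq += 1
        return frame

jb = ReorderingJitterBuffer()
jb.receive(1, b"frame-1")            # packet 1 arrives before packet 0
jb.receive(0, b"frame-0")
print(jb.next_frame(), jb.next_frame(), jb.next_frame())
# b'frame-0' b'frame-1' b'frame-1'   (frame 2 missing, previous frame replayed)
```

In practice the re-ordering and jitter-elimination functions are combined in a single adaptive jitter buffer at the receiver.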