Buffer Management for Shared-Memory ATM Switches
• Written by: Mutlu Arpaci, John A. Copeland
Georgia Institute of Technology
• Presented by: Yan Huang
Outline
• Describe several buffer management policies and their strengths and weaknesses.
• Evaluate the performance of the various policies using computer simulations.
• Compare the most important schemes.
Some Basic Definitions
• The primary purpose of an ATM switch is to route incoming cells arriving on a particular input link to the appropriate output link (switching)
• Three basic techniques are used
– space-division: crossbar switch
– shared-medium: based on a common high-speed bus
– shared-memory
• Switches also provide queuing functionality
– input queuing, output queuing, shared memory
Shared-Memory Switch
• Consists of a single dual-ported memory shared by all input and output lines
• Combines both switching and queuing
• Does not suffer from the throughput degradation caused by head-of-line (HOL) blocking
• The main focus is buffer allocation
– determines how the total buffer space (memory) will be used by the individual output ports of the switch
Shared-Memory Switch (Cont’d)
• The selection and implementation of the buffer allocation policy is referred to as buffer management
• Model of the SM switch
– N output ports
– M buffer spaces
• Performance: cell loss
– occurs when a cell arrives at a switch node and finds the buffer full
Buffer Allocation Policies
• Stochastic assumptions
– Poisson arrivals
– Exponential service times
• Static Thresholds
– Complete Partition (CP)
• The entire buffer space is permanently partitioned among the N servers.
• Does not provide any sharing
– Complete Sharing (CS)
• An arriving packet is accepted if any space is available in the switch memory
• Independent of the server to which the packet is directed
Comparison of CP and CS
• CP policy
– the buffer allocated to a port is wasted if that port is inactive, since it cannot be used by other, possibly active, ports
• CS policy
– one port may monopolize most of the storage space if it is highly utilized
• In the CS policy, a packet is lost when the common memory is full. In CP, a packet is lost when its corresponding queue has already reached its maximum allocation.
• The assumption on the traffic arrival process enables us to model the switch as a Markov process (Fig. 3)
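The two static policies can be sketched as admission rules (a minimal sketch with hypothetical helper names, not code from the paper; `q` holds the current queue length of each output port):

```python
# M: total buffer space, N: number of output ports, q: per-port queue lengths.

def admit_cp(q, port, M, N):
    """Complete Partition: each port owns a fixed M/N slice of the buffer."""
    return q[port] < M // N

def admit_cs(q, port, M, N):
    """Complete Sharing: accept whenever any space is free, regardless of port."""
    return sum(q) < M
```

With M = 300 and N = 2, a cell for port 0 whose queue already holds 150 cells is rejected under CP even if port 1 is completely idle, while CS accepts it.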
Simulation
• The assumption of exponential interarrival and service time distributions is not realistic for ATM systems
• The traffic in ATM networks is bursty in nature
– To model it, use an ON/OFF source
• Simulation parameters
– mean duration of the ON state = 240
– mean duration of the OFF state = 720
– cell interarrival time = 5
– the switch model has two output ports (N = 2)
– the size of the shared memory is 300 cells (M = 300)
– the performance metric is the cell loss ratio (CLR) at each port
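An ON/OFF source with the parameters above can be sketched as follows (assumptions: exponentially distributed state durations, and `onoff_cells` is a hypothetical helper, not code from the paper):

```python
import random

MEAN_ON, MEAN_OFF, CELL_SPACING = 240, 720, 5  # parameters from the slide

def onoff_cells(total_time, seed=0):
    """Yield cell arrival times: one cell every CELL_SPACING time units
    while the source is ON; no cells while it is OFF."""
    rng = random.Random(seed)
    t, on = 0.0, False
    while t < total_time:
        mean = MEAN_ON if on else MEAN_OFF
        duration = rng.expovariate(1.0 / mean)
        if on:
            # emit cells at fixed spacing for the length of this ON burst
            for k in range(int(duration // CELL_SPACING)):
                yield t + k * CELL_SPACING
        t += duration
        on = not on
```

One such source is ON a fraction 240 / (240 + 720) = 0.25 of the time, giving an average rate of 0.25 / 5 = 0.05 cells per time unit.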
Performance of CS and CP
• Balanced traffic: the loads at the two ports are equal
– For medium traffic loads, CS achieves a lower CLR
Performance of CS and CP (Cont’d)
• Imbalanced traffic
– The load at one port is varied, while the load at the other port remains constant
– CS: both ports have the same CLR
– CP: the port buffers are isolated; the CLR at port 1 increases with its traffic load
Sharing with Maximum Queue Length
• SMXQ: a limit is imposed on the number of buffers that can be allocated at any time to any server.
• There is one global threshold for all the queues.
• Advantages of SMXQ:
– SMXQ achieves a lower CLR than CP and manages to isolate the “good” port from the “bad” port. The better CLR performance is obtained through buffer sharing; the isolation is obtained by restricting the queue length.
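The SMXQ rule amounts to adding one global per-queue cap on top of CS (a minimal sketch; the threshold value used below is an assumed example, not the paper’s tuned setting):

```python
def admit_smxq(q, port, M, T):
    """Accept only if the shared memory has room AND the port's queue
    is below the global per-queue threshold T."""
    return sum(q) < M and q[port] < T
```

With M = 300 and T = 200, a burst on one port can grow its queue to at most 200 cells, so at least 100 cells of memory always remain claimable by the other port.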
SMA and SMQMA
• Two variations of SMXQ
– SMA (sharing with a minimum allocation)
• A minimum number of buffers is always reserved for each port.
– SMQMA (sharing with a maximum queue and a minimum allocation)
• Each port always has access to a minimum allocated space, but no port can grow an arbitrarily long queue.
– SMQMA has the following advantage over SMXQ:
• A minimum space is allocated for each port, which simplifies serving high-priority traffic in a buffer-sharing environment.
Push-Out
• Push-out (PO), also called drop-on-demand (DoD)
– A previously accepted packet can be dropped from the longest queue in the switch to make room for a new arrival
• Advantages
– Fair
– Efficient
– Naturally adaptive
– Achieves a lower CLR than the optimal SMXQ setting
• Drawback
– Difficult to implement
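The push-out rule can be sketched as follows (simplified: the victim is one cell from the longest queue; a real switch must also handle the per-cell bookkeeping that makes PO hard to implement):

```python
def push_out(q, port, M):
    """Admit a cell for `port`; if the shared memory is full, first drop
    one cell from the longest queue. Returns the victim port, or None."""
    dropped = None
    if sum(q) >= M:
        dropped = max(range(len(q)), key=lambda p: q[p])  # longest queue
        q[dropped] -= 1
    q[port] += 1
    return dropped
```

Because the victim is always the longest queue, a heavily loaded port cannot starve a lightly loaded one: the heavy port’s own cells are the first to be pushed out.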
Push-Out with Threshold
• In ATM networks, different ports carrying different traffic types might have different priorities.
• CSVP, a modification to PO, achieves priorities among the ports. A similar idea is called POT (push-out with threshold).
• CSVP has the following attributes:
– N users share the total available buffer space M, which is virtually partitioned into N segments corresponding to the N ports
• When the buffer is full, there are two possibilities:
– If the arriving cell’s type i occupies less space than its allocation Ki, then at least one other type, say j, must occupy more than its own allocation Kj. The admission policy admits the newly arriving type-i cell by pushing out a type-j cell.
– If the arriving cell’s queue exceeds its allocation at the time of arrival, the cell is rejected.
• When the buffer is not full
– CSVP operates as CS.
– Under heavy traffic loads, the system tends toward CP-like behavior.
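The CSVP rule above can be sketched as follows (assumed names; `K[i]` is the virtual allocation of port i, with the Ki summing to M):

```python
def admit_csvp(q, port, K, M):
    """Return (admitted, victim_port). Below M, behave as CS; when the
    buffer is full, push out from a port that exceeds its allocation,
    but only if the arriving port is under its own allocation K[port]."""
    if sum(q) < M:           # buffer not full: plain Complete Sharing
        q[port] += 1
        return True, None
    if q[port] < K[port]:    # under-allocation arrival: push out an offender
        victim = next(p for p in range(len(q)) if q[p] > K[p])
        q[victim] -= 1
        q[port] += 1
        return True, victim
    return False, None       # over-allocation arrival into a full buffer
```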
Dynamic Policies
• The analyses of the buffer allocation problem above assume static environments
• Dynamic Threshold (DT) can be used to adapt to changes in traffic conditions.
– The queue length thresholds of the ports are proportional to the current amount of unused buffering in the switch: T(t) = α (M − Q(t)), where Q(t) is the total occupied buffer space
– Cell arrivals for an output port are blocked whenever the output port’s queue length equals or exceeds the current threshold value
– The major advantage of DT is its robustness to traffic load changes, a feature not present in the static threshold (ST) policies