Broadband Communications Part I

  • OVERVIEW

    This topic describes the general principles and structures of two advanced high speed backbone network technologies, and gives you the chance to start evaluating their potential. We'll start our discussion by defining what we mean by 'high speed networks', and then we'll introduce the two advanced high speed networks. The first, Gigabit Ethernet, is an upgraded version of the 802.3

    Topic 2

    OBJECTIVES

    On completion of this topic, you should be able to:

    1. Describe the main features of Gigabit Ethernet.

    2. List and describe the advantages of using Gigabit Ethernet.

    3. Discuss the main features and advantages of ATM.

    4. Explain the concept of statistical multiplexing and multiplexing gain.

    5. Outline the layered implementation of Broadband Integrated Service Digital Networks (B-ISDN) over ATM.

    6. Describe the ATM cell header structure and the virtual path concept.

    7. Describe the service classes supported by ATM and explain the functions of different types of AAL protocols.

    8. Discuss how ATM can support IP.

    9. Describe the concept of IP over SONET.

    Broadband Communications (Part I)

  • TOPIC 2 BROADBAND COMMUNICATIONS (PART I) 22

    Ethernet. Gigabit Ethernet has the advantage of simple backward compatibility with older LAN technologies. In the remainder of this topic, we'll focus our study on a newer technology, Asynchronous Transfer Mode (ATM), which has been adopted as the supporting technology for future Broadband Integrated Service Digital Networks (B-ISDN). Since ATM is very different from existing LAN technology, we'll spend some time considering its main features and benefits. You'll learn about the B-ISDN layered model based on ATM, and the header structure of ATM cells is then described. You'll be introduced to the concept of virtual paths, which simplify call admission and network management. After that, the ATM adaptation layer, which converts the higher layer service to the ATM layer, is discussed. The service classes supported by ATM networks are introduced next, and the issues of traffic management and control are investigated. Then, we'll illustrate how to integrate IP with ATM. We'll draw a simple comparison between Gigabit Ethernet and ATM, and introduce a competing technology, IP over SONET. Finally, we'll briefly overview how to offer IP/ATM integrated services over xDSL technologies. The self-tests in this topic are designed to help you understand the concepts and performance of high speed networks. The summary and glossary at the end of this topic should help you as you work through this topic and as you do your revision.

    INTRODUCTION

    We begin this topic with high speed backbone networks, the infrastructure needed to support the growing number of bandwidth-greedy applications. First, we need to ask an obvious but important question: what are high speed networks? Well, if you think about it, you will agree that the answer changes with time. Recall that modems in the 1980s were still running at speeds of 300–1200 bps, and networks (in the early 1980s) carrying a data rate of around 10 Mbps were considered to be high speed. In fact, networks such as Ethernet could easily support hundreds of users in a local area with applications such as email and file transfer.


    The introduction of the World Wide Web (WWW) in the 1990s, however, caused explosive growth in data traffic over the Internet. The old high speed networks were no longer sufficient to support the continuous growth in traffic volume. In addition, there were increasing demands to support multimedia applications over long distances; for example, to set up a video conference between two parties in the US and Hong Kong. Supporting these new types of services (such as video) required the development not only of advanced compression techniques to pack the data, but also of new high speed networks to transfer the information. Table 2.1 below tracks how the meaning of high speed networks has evolved over time. This is similar to the evolution of computers, in which faster and faster CPUs (the 8086, the 80286, and so on) have been developed until today, when the Pentium III has captured the market.

    Table 2.1: The meaning of high speed networks changes with the times

    Year High speed network

    1960s 64 kbps

    1970s 1 Mbps

    1980s 10 Mbps

    Early 1990s 100 Mbps

    Today 1 Gbps

    2005 > 10 Gbps

    You'll be introduced to two currently emerging high speed networks in this topic: Gigabit Ethernet and Asynchronous Transfer Mode (ATM). These new networking technologies have enjoyed a great deal of attention in the LAN and WAN arenas, respectively. You'll be introduced to the basics of each of these high speed networks here in the study topic, and then referred to articles and Web pages for a more detailed description. First, however, we take a closer look at the idea of a high speed backbone network.


    HIGH SPEED BACKBONE NETWORKS

    There is a growing demand for multimedia applications over networks. In fact, entertainment is the major driving force for bandwidth-hungry applications. One day-to-day example is interactive TV (iTV) service, which actually comprises MPEG-2 video streams with a bandwidth requirement of about 3 Mbps. (Since MPEG-2 is a variable bit rate coding standard ranging from 1.5 Mbps to 9.8 Mbps, 3 Mbps is only a typical value.) Now, suppose you live in a residential building with 100 families subscribing to iTV services. The backbone network between this residential building and the video service centre providing the programming will require a bandwidth of at least 300 Mbps. If there are 1000 subscribers, the bandwidth requirement jumps to 3 Gbps. You can see, therefore, that the backbone networks of the future must have sufficient capacity to handle this growing traffic demand. Let's look at another example. The use of network-licensed software is very common in large corporations and organizations (such as the OUHK). Network-licensed software can reduce costs and provide flexibility. When network traffic is heavily localized (as it is on the OUHK campus), a single high speed LAN (e.g., Fast Ethernet) can easily support hundreds of users. Now suppose we extend the concept of using network-licensed software a step further: consider a very basic computer running applications, via high speed connections, that are located on remote, powerful servers. This is basically the idea of the Network Computer (NC). Although the success of NCs is still debatable, their potential applications cannot simply be ignored. Again, such applications will require high speed backbone networks to interconnect users and servers spread over wide areas. The above discussion suggests that new high speed networks are needed to support the growing traffic. If we agree this is the case, then network managers must face some challenging questions: Is it worth upgrading the existing network with new technology? What technology should be used? Is the new technology an evolution of existing networks, or a revolution? What are the short-term and long-term costs and returns of a new technology? In the following sections, you will learn about two emerging high speed networks: Gigabit Ethernet and ATM. Gigabit Ethernet is basically a speeded-up version of the legacy Ethernet LAN. ATM, on the other hand, is a completely new networking technology that aims to revolutionize existing networks. As time has progressed, Gigabit Ethernet and ATM have found their own market positions in LAN and WAN environments, respectively. We shall start our discussion with Gigabit Ethernet, since most of you have already encountered Ethernet in your study of other computer networks.
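    The backbone arithmetic in the iTV example above is simple enough to sketch in a few lines. This is only an illustration: the 3 Mbps figure comes from the text, while the function name is our own.

```python
# Aggregate backbone bandwidth if every iTV subscriber streams at once.
# 3 Mbps is the typical MPEG-2 stream rate quoted in the text; the
# function name here is hypothetical, used only for this illustration.
MPEG2_TYPICAL_MBPS = 3.0

def backbone_demand_mbps(subscribers: int,
                         per_stream_mbps: float = MPEG2_TYPICAL_MBPS) -> float:
    """Worst-case demand: all subscribers active simultaneously."""
    return subscribers * per_stream_mbps

print(backbone_demand_mbps(100))   # 300.0 Mbps for one building
print(backbone_demand_mbps(1000))  # 3000.0 Mbps, i.e. 3 Gbps
```

    Of course, this is a peak-rate estimate; as the later discussion of statistical multiplexing shows, the capacity actually needed can be lower when not all streams peak together.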


    GIGABIT ETHERNET (IEEE 802.3Z)

    Ethernet is by far the most popular LAN: industry estimates indicate that over 50 million Ethernet nodes have been installed worldwide. Given Ethernet's great popularity, engineers have worked hard to improve the speed of Ethernet without making significant changes to the existing hardware it runs on. This led to the development of Fast Ethernet, which quickly captured the 100 Mbps LAN market. With the increase in multimedia applications over networks, however, even 100 Mbps networks have been unable to keep pace with the rapid growth in traffic volume. This provided the motivation for the design of networks with speeds beyond 100 Mbps. Because of the overwhelming success of Ethernet (more than 85% of all installed network connections are Ethernet), engineers wanted to push the speed of Ethernet to 1 Gbps. This led to the new Gigabit Ethernet standard, IEEE 802.3z.

    What is Gigabit Ethernet?

    Gigabit Ethernet is basically Ethernet running at a speed of 1 Gbps. Gigabit Ethernet uses the same Ethernet frame format and media access control technology (CSMA/CD) as other members of the 802.3 Ethernet family. It also uses the same 802.3 full-duplex Ethernet technology and 802.3 flow control. The whole design philosophy behind Gigabit Ethernet is to maintain compatibility with the many millions of existing Ethernet and Fast Ethernet connections installed worldwide. As is usually the case with emerging technologies, however, there is no single reference that provides the whole picture of Gigabit Ethernet. Therefore, you need to read several sources to assemble the necessary background information. Please begin by reading the following three articles.


    Moorthy, V (1997) Survey of Gigabit Ethernet, University of Ohio. Although this paper was written before the approval of the current standard, it provides a solid background on the development and basic concepts of Gigabit Ethernet.

    Introduction to Gigabit Ethernet, Technical Document from Cisco Systems, http://www.cisco.com/warp/public/cc/techno/media/lan/gig/tech/gigbt_tc.htm. This brief article provides a technical overview of Gigabit Ethernet, including its architecture, standards and topologies.

    READING 2.1

    READING 2.2


    After the above readings, it is time to test your understanding of the material. Please do your best to answer the following questions.

    White paper: Gigabit Ethernet Overview (updated May 1999), Gigabit Ethernet Alliance, http://www.10gea.org/Tech-whitepapers.htm. The Gigabit Ethernet Alliance is an open forum whose purpose is to promote industry cooperation in the development of Gigabit Ethernet. This white paper provides a brief overview of Gigabit Ethernet, including its advantages, its technology fundamentals and network migration scenarios. As Gigabit Ethernet is still under development, you are encouraged to visit the Gigabit Ethernet Alliance website for updated information. There are also a couple of optional readings you might want to look up if you want more details or are particularly interested in this subject:

    (Optional) FAQ about Gigabit Ethernet, Gigabit Ethernet Alliance, http://www.10gea.org/tech-faqs.htm. This website contains answers to frequently asked questions about Gigabit Ethernet. Interested readers can refer to it for further information.

    (Optional) Philippe Ginier-Gillet and Christopher T Di Minico, 1000BASE-T: Gigabit Ethernet Over Category 5 Copper Cabling, 3Com

    This paper provides a technical understanding of the fundamentals of 1000BASE-T, Gigabit Ethernet run over Category 5 copper cabling.

    READING 2.3


    After the successful introduction of Gigabit Ethernet, the industry pushed a step further, extending the bandwidth of Ethernet to 10 Gbit/s. A new consortium, the 10 Gigabit Ethernet Alliance (www.10gea.org), was formed to support the advancement of the 10 Gigabit Ethernet standard. Please refer to the 10 Gigabit Ethernet Alliance for further information. (Optional reading) 10 Gigabit Ethernet: An Introduction, 10 Gigabit Ethernet Alliance. This paper introduces the technologies used by the emerging standard for 10 Gigabit Ethernet. After reading the assigned articles, you may already have concluded that Gigabit Ethernet is a great solution for broadband networking. You are right, in a certain sense. Due to the low cost and success of Ethernet and Fast Ethernet, it is expected that Gigabit Ethernet will quickly come to dominate the backbone market. However, Gigabit Ethernet is currently still limited to LAN environments because the CSMA/CD protocol does not scale to high speed wide area networks (WANs). Let's look at this issue in a bit more detail. There are two fundamental criteria for the speed and efficiency of networks:

    1. Delay: the round-trip time for transmitting information between the transmitter and the receiver;

    2. Bandwidth: the number of bits transmitted per second.

    1. What are the main advantages of using Gigabit Ethernet?

    2. Does Gigabit Ethernet follow the CSMA/CD protocol? What are some of the new features added in Gigabit Ethernet?

    3. What are the transmission media used in Gigabit Ethernet? Discuss the possible use of UTP to support Gigabit Ethernet.

    4. How would you rate the performance of Gigabit Ethernet in terms of throughput efficiency?

    ACTIVITY 2.1


    The bandwidth-delay product (BDP) is the product of these two parameters. At the data link layer, the BDP represents the maximum amount of unacknowledged data allowed inside the network. The IEEE 802.3 specification (10BASE-T) allows a total cable length of 2500 m with four repeaters. The worst-case signal propagation delay is then about 50 µs, which corresponds to a frame of 500 bits (= 10 Mbps × 50 µs) at 10 Mbps. As network speed increases, the minimum frame length must increase, or the maximum cable length must decrease, proportionally. Now consider a network operating at 1 Gbps over a distance of 250 km, with a round-trip propagation delay of 5 ms. If we use the CSMA/CD protocol, the minimum frame size becomes (1 Gbps × 5 ms) = 5 Mbits, which is unreasonably large. Technically speaking, the protocol is limited by large bandwidth-delay products: in high speed WANs, this kind of feedback-based protocol is inefficient. The recent rapid increase in multimedia applications such as video and voice over networks has introduced applications that impose different performance requirements on the network bearer service, known as Quality of Service (QoS). In general, QoS is measured in terms of delay, bandwidth and packet loss. However, Ethernet was originally designed to transfer data that were not delivered in real time. Despite the enhanced speed of Gigabit Ethernet, the underlying CSMA/CD protocol still does not place any rigid bounds on the maximum delay for transmitted data. To provide the necessary delay guarantees, new protocols must be added to the existing ones. One such protocol is RSVP (Resource reSerVation Protocol), which was developed to enable data networks to support real-time applications. The issues of providing Internet QoS will be discussed in another topic. All of these additional efforts were made because existing datagram networks were not designed to support real-time traffic.
    Instead of adding new features to enhance existing networks, a more revolutionary solution is to invent new networks that support multiple types of service. This led to the development of Broadband Integrated Services Digital Networks (B-ISDN), which will be discussed in the following section.
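    The CSMA/CD minimum-frame arithmetic above can be checked with a short sketch. The function name is our own; the figures are the ones used in the text.

```python
def min_frame_bits(rate_bps: float, round_trip_s: float) -> float:
    """CSMA/CD requires a frame to outlast the worst-case round-trip
    time so a collision is detected while the sender is still sending.
    The minimum frame length is therefore rate x round-trip delay."""
    return rate_bps * round_trip_s

# Classic 10 Mbps Ethernet: ~50 microsecond worst-case round trip
print(min_frame_bits(10e6, 50e-6))   # about 500 bits

# A hypothetical 1 Gbps CSMA/CD WAN with a 5 ms round trip
print(min_frame_bits(1e9, 5e-3))     # about 5e6 bits, i.e. 5 Mbits
```

    The 5 Mbit result is exactly the "unreasonably large" minimum frame the text describes, which is why CSMA/CD does not scale to high speed WANs.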


    ASYNCHRONOUS TRANSFER MODE (ATM)

    At present, separate networks are built for specific purposes such as telephone, television, data, and so on. Each network has its own protocols, equipment and transmission facilities. All of these networks cost money and have associated maintenance overheads. However, the objective of all these networks is the same: to transfer information. Integrating these networks, that is, creating a single network to cope with all the different service requirements, might reduce these costs. Broadband Integrated Service Digital Networks (B-ISDN) was conceived as an all-purpose digital network to achieve this aim. 'What about the Internet?' you may ask. We can transfer files, send email, talk with others using IP phones and watch RealVideo over the Internet. Is the Internet going to be the B-ISDN? The answer is: yes and no. There is no doubt that the Internet is the dominant network in the world, and that it can achieve most of the aims listed for B-ISDN. However, the Internet, being a datagram network, is not designed to support the Quality of Service required by real-time applications such as video. For instance, as you likely know from experience, the quality of RealVideo is still far from satisfactory compared to TV. New protocols have been introduced to improve such services, but there is still a fundamental limit to the QoS that can be supported on the existing Internet. A more forward-looking approach is to build new high speed networks for the next generation of the Internet. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is responsible for defining the concept, management, services and technology for B-ISDN. Asynchronous Transfer Mode (ATM) has been adopted by the ITU-T as the technology to support B-ISDN. In simple terms, ATM is a connection-oriented, packet-switching technique that uses short, fixed-size packets called cells as its unit of transfer.
    An ATM cell consists of a 5-byte header field and a 48-byte information field. The size of the cell was chosen to balance the requirements of latency and protocol efficiency. Each cell contains addressing and control information so that switching functions can be implemented directly in hardware instead of processing cells in software. This greatly reduces the switching complexity and processing load at the intermediate switching nodes, which is essential for high speed networking. In addition, cells provide a powerful mechanism for supporting different service characteristics over a common communication infrastructure. By segmenting traffic into these fixed-size cells, we can multiplex traffic from different services onto the same network.
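    The fixed 5-byte header plus 48-byte payload makes the protocol overhead easy to quantify. A quick sketch (the 1500-byte packet is just a familiar illustrative size, and AAL padding is ignored):

```python
import math

HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES    # 53 bytes per cell

# Fraction of each cell that carries user data.
efficiency = PAYLOAD_BYTES / CELL_BYTES
print(f"{efficiency:.1%}")                   # 90.6%

# Cells needed to carry a 1500-byte packet (AAL padding ignored).
print(math.ceil(1500 / PAYLOAD_BYTES))       # 32
```

    So roughly 9.4% of the link capacity is spent on headers, the price paid for hardware switching and simple multiplexing.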


    In addition, ATM is a connection-oriented technique in which end-to-end paths are established prior to the beginning of the information transfer. The routing of cells is thus greatly simplified because all cells going to the same destination are delivered in sequence and follow the same path. The ATM protocol encompasses features of both the data link layer (layer 2) and the network layer (layer 3) of the OSI model. ATM is an attempt to combine the benefits of both circuit switching and packet switching in an integrated network. However, it is a compromise in that it is not designed specifically for any particular application. We do not (or should not) expect ATM to handle a particular application as efficiently or as cost-effectively as a network specially designed for such a service. Instead, ATM is intended to allow the integration of all these different types of services and yet still provide satisfactory performance.

    The ups and downs of ATM

    The concept of ATM was proposed in the late 1980s. The promised advantages of ATM attracted much attention from the communications industry. Consequently, the ATM Forum, a consortium of companies, was formed to accelerate the definition of ATM technology. Its membership is made up of network equipment providers, semiconductor manufacturers, service providers and carriers, as well as end users. The ATM Forum is not a standards body, but its specifications are passed up to the ITU-T for approval, and the ITU-T fully recognizes the ATM Forum as a credible working group. The ITU-T issued a first series of recommendations specifying the details of ATM for use in B-ISDN in the early 1990s. Much effort was spent on ATM research, and new standards continued to roll out until 1995. Unfortunately, ATM gradually lost its momentum after 1995. There are several reasons for this:

    1. Many implementation problems, such as buffer dimensioning and bandwidth allocation, arose during the development of services in ATM.

    2. The high cost of ATM desktop interfaces, when compared to competing technologies such as Fast Ethernet, slowed down the general acceptance of ATM.

    3. The aims of ATM were too aggressive and it tried to support too many services. This slowed down the process of defining standards.

    4. Complicated internetworking with existing LANs turned away potential customers.


    5. The explosive popularity of WWW made the Internet the de facto global network. This made it harder to persuade customers to change to ATM.

    Although ATM lost the battle in LAN environments, it continues to enjoy dominance in the high speed backbone WAN market. Various ATM-based WANs have been set up around the world (predominantly in the USA). For instance, most major telecom companies, such as AT&T and MCI, have built their own ATM backbone WANs. PCCW-HKT IMS also has a 622 Mbps ATM ring network supporting broadband services. This enables each Netvigator 1.5M Ultraline subscriber to have 1.5 Mbps of dedicated bandwidth to connect to the Internet. The popularity of the WWW has led to increasing demand for supporting real-time applications such as video conferencing over the Internet. The ability of ATM to provide QoS attracted attention from the industry again in 1998. In addition, the wide deployment of ATM WANs provides the large bandwidth (2.4 Gbps ATM is already available) needed for future bandwidth-greedy Internet services. This has led to the idea of running the Internet Protocol on top of the ATM protocol (known as IP-over-ATM), that is, using ATM as the underlying network to support Internet services. This issue will be addressed in a later part of this topic. As you have learned, a great deal of research has been carried out on the next generation Internet. One such effort is the very-high-performance Backbone Network Service (vBNS), which is currently implemented as an IP-over-ATM network. In addition, there are growing demands for accessing the Internet through wireless communications. This has motivated the development of Wireless ATM to support multimedia services in wireless environments. Wireless ATM represents an effort to merge different technologies to generate new solutions, and will be covered in another topic.

    Virtual circuit switching

    ATM is basically a virtual circuit switching technology. Virtual circuit switching can be viewed as a mixture of circuit switching and packet switching. As in packet switching, the message is divided into several packets; but instead of being transmitted to the destination through different routes, as in pure packet switching, all the packets are transmitted along the same route and arrive at the destination in sequence. This is called virtual circuit switching because, from the user's point of view, they are transmitting data on a dedicated circuit. However, this circuit is not private; it can be shared by a number of connections. By using a fixed route, the size of the headers can be greatly reduced, as


    only the virtual circuit needs to be identified, not the exact address of the destination. Also, with a fixed route between source and destination, the packets arrive in the same sequence as they were sent from the sender's side. Please work through the following reading to refresh your memory of the different switching technologies: circuit switching, packet switching and virtual circuit switching.
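    The label-based forwarding idea can be made concrete with a small sketch. The table entries and numbers below are entirely hypothetical, invented for illustration only.

```python
# Hypothetical virtual-circuit switch: the route is fixed at connection
# setup, so forwarding only maps (incoming port, incoming label) to
# (outgoing port, outgoing label). No destination-address lookup is
# needed, and labels are local to each link, so they can stay short.
vc_table = {
    (1, 42): (3, 77),   # a circuit entering port 1 with label 42
    (2, 42): (3, 91),   # label 42 reused on a different input port
}

def forward(in_port: int, in_label: int) -> tuple[int, int]:
    """Return the (out_port, out_label) for an arriving packet."""
    return vc_table[(in_port, in_label)]

print(forward(1, 42))   # (3, 77)
print(forward(2, 42))   # (3, 91)
```

    Note how the same label value can be reused on different input ports: this is exactly why a virtual-circuit identifier can be much smaller than a globally unique destination address.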

    Time division multiplexing v. statistical multiplexing

    An ATM network must be able to transfer multiple types of traffic with diverse characteristics. One critical requirement of ATM is the ability to carry traffic efficiently over high speed (at least 100 Mbps) networks. The underlying multiplexing technique in ATM is statistical multiplexing; this technique enables ATM to transfer variable bit rate (VBR) traffic efficiently. In this section, you'll be introduced to the basic concepts of statistical multiplexing. Suppose we have N traffic sources arriving at an output link, and each source temporarily stores its packets in a buffer. In TDM, each source is assigned a dedicated time slot in a periodic frame in which to transmit the packets in its buffer. The identity of each packet can then be determined from its position in the frame. TDM is widely used in circuit-switched networks. Although TDM is simple, it may not fully utilize the available link capacity. As shown in Figure 2.1(a), there is no packet from source 2 in the second frame, so that dedicated time slot goes unused. Similarly, the first slot in the third frame is wasted. Thus, we cannot fully utilize the link capacity of the output.

    Walrand and Varaiya (1996) High-Performance Communication Networks, Morgan Kaufmann: Section 2.5.2, pp. 60–65. This reading reviews the concepts of circuit switching, packet switching and virtual circuit switching.

    READING 2.4


    Figure 2.1(a): Time slots are unused in time division multiplexing

    Suppose the restriction of dedicated time slots is removed, however, and other sources are allowed to transmit as long as their buffers are non-empty. In Figure 2.1(b), the second packet from source 3 is allowed to transmit after the second packet from source 1. Then no time slot is wasted, resulting in higher utilization of the link capacity. In fact, there are no empty slots as long as the sources have packets to transmit. This is known as statistical multiplexing (or asynchronous time division multiplexing) because the slots are no longer used synchronously. This is why the term asynchronous is used in ATM. However, each packet must now carry an address to identify itself, because its identity can no longer be inferred from its time slot position in a frame.

    Figure 2.1(b): All the time slots are used in statistical multiplexing
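    The slot-counting argument can be sketched numerically. The arrival pattern below is illustrative, not the exact pattern shown in Figure 2.1.

```python
# 1 = a packet is queued for that frame, 0 = the source is idle.
arrivals = [
    [1, 1, 0],   # source 1 has nothing to send in frame 3
    [1, 0, 1],   # source 2 has nothing to send in frame 2
    [1, 1, 1],   # source 3 is always busy
]
n_sources, n_frames = 3, 3
packets = sum(map(sum, arrivals))    # 7 packets in total

# TDM: every source owns one slot per frame, so idle sources waste slots.
tdm_slots = n_sources * n_frames     # 9 slots transmitted
tdm_wasted = tdm_slots - packets     # 2 slots carry nothing

# Statistical multiplexing: any queued packet may take the next free
# slot, so the 7 packets occupy the first 7 slots back to back. The
# price is that each packet now carries a label identifying its source.
stat_slots_used = packets

print(tdm_wasted, stat_slots_used)   # 2 7
```

    Two of the nine TDM slots go unused, while statistical multiplexing leaves no slot empty as long as some source still has a packet queued.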

    Statistical multiplexing gain

    You can see from this basic description how statistical multiplexing can be more bandwidth-efficient than time division multiplexing. Statistical multiplexing is particularly well suited to applications with bursty traffic characteristics, i.e. traffic with large variations in bit rate. To support several variable bit rate (VBR) connections sharing a link without packet loss, the safest approach is to provide bandwidth equal to the sum of their peak bit rates. In most cases, however, this over-allocates bandwidth, because it is unlikely that all the sources will transmit simultaneously at their peak rates. Ideally, we


    would like to allocate sufficient bandwidth (less than the sum of the peak rates) to support the multiplexed traffic without losses. In Figure 2.2, you can see three traffic sources multiplexed onto an output link. The sum of their peak rates is 5 Mb/s and the sum of their average rates is 2.5 Mb/s. To ensure no packet losses, we could allocate a bandwidth of 5 Mb/s to the output link. However, since the three traffic sources do not all transmit at their peak bit rates simultaneously, a link capacity of 3 Mb/s is enough to serve the multiplexed sources. That is, we have a saving of (5 - 3) = 2 Mb/s. The ratio of total input capacity to total output capacity is defined as the multiplexing gain; in this example, the multiplexing gain is 5/3. In general, the multiplexing gain depends on the number of traffic sources and their traffic characteristics. In the ideal case, the link capacity would equal the sum of the average rates of the multiplexed traffic. On the other hand, we've lost some simplicity of implementation by using statistical multiplexing. We need additional overhead, such as packet labels to identify source or destination addresses, and this requires more complicated network controls to ensure the correct routing of packets. In addition, the actual savings in bandwidth are highly dependent on the traffic characteristics: if the traffic is very smooth, we will not see much saving in bandwidth. There is therefore a trade-off between complexity and multiplexing gain.

    Figure 2.2: Statistical multiplexing
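    The gain arithmetic from the example works out as follows. Only the totals (peak 5 Mb/s, average 2.5 Mb/s) come from the text; the per-source split is our own assumption for illustration.

```python
# Three VBR sources; individual rates are illustrative but sum to the
# totals given in the text (peak 5 Mb/s, average 2.5 Mb/s).
peak_rates = [2.0, 1.5, 1.5]     # Mb/s
avg_rates = [1.0, 0.75, 0.75]    # Mb/s

peak_sum = sum(peak_rates)       # 5.0 Mb/s: loss-free worst-case allocation
link_capacity = 3.0              # Mb/s: capacity found sufficient in the text

saving = peak_sum - link_capacity
gain = peak_sum / link_capacity  # multiplexing gain = input/output capacity
print(saving, round(gain, 2))    # 2.0 1.67
```

    In the ideal case the link capacity could shrink all the way to the 2.5 Mb/s average-rate sum, giving a gain of 2; the achievable gain in practice lies between these bounds, depending on how bursty the traffic is.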

    In any case, you can see now how statistical multiplexing is used in ATM to transfer multiple traffic streams over the networks. The following sections describe some more of the basic concepts behind ATM.


    Benefits of ATM

    Several benefits of ATM are described in the following reading. Please go through them carefully.

    The layered B-ISDN model for ATM

    You have learned that a layered architecture has the advantage of each layer performing a well-defined function. The layered B-ISDN protocol model for ATM is shown in Figure 2.3, which defines the function of each layer.

    What are the advantages and disadvantages of using statistical multiplexing over time division multiplexing?

    SELF-TEST 2.1

    READING 2.5

    List and explain the advantages of using ATM over existing LANs.

    SELF-TEST 2.2

    The International Engineering Consortium, Asynchronous Transfer Mode (ATM) Fundamentals, Web ProForum Tutorial, Section 2, pp. 1–4.


    Figure 2.3: The B-ISDN ATM protocol reference model

    This model contains four planes:

    1. a user plane to transport user information;

    2. a control plane mainly containing signalling information;

    3. a management plane to maintain the network and perform operational functions; and

    4. plane management to manage the different planes. Within each plane, separate layers perform specific functions. Let's look at the lowest three layers: the physical layer, which mainly transports information (bits/cells); the ATM layer, which mainly performs switching/routing and multiplexing; and the ATM adaptation layer (AAL), which adapts service information to the ATM stream. The physical layer is responsible for the correct transmission and reception of bits on the physical medium. Common physical media such as UTP, coaxial cable and optical fibre can support ATM. The ATM layer is fully independent of the physical layer. It provides a connection-oriented cell switching service using the virtual path (VP) concept. The ATM layer is primarily responsible for the generation of the cell header and the functions associated with the header, such as the switching and routing of cells, flow control, congestion notification and bit error detection in the header. Some of the main functions of the ATM header will be explained in the next section. The ATM Adaptation Layer (AAL) is an end-to-end protocol that provides the interface between the ATM layer and higher layer protocols and applications. It is the capabilities of the AAL that make ATM able to support a wide range of services. We shall discuss the AAL in a later part of this topic.


    ATM header structure

Do you recall that each ATM cell consists of a 5-byte header and a 48-byte payload? Unlike TCP/IP, ATM implements no error protection or flow control on a link-by-link basis, because ATM is assumed to operate over links with a very low BER (on the order of 10⁻⁹). This significantly reduces the size of ATM headers. Furthermore, to guarantee fast processing in the network, very few functions are installed in ATM headers. The header structure at the UNI (User Network Interface) is shown in Figure 2.4.

Figure 2.4: ATM header structure at the UNI

The component fields in the header and their functions are:

    Generic Flow Control (GFC) implements the flow control mechanism on the user-network interface (UNI).

    Virtual Path Identifier/Virtual Channel Identifier (VPI/VCI) identifies the virtual connection and routes the cell to the correct virtual channel. ATM virtual circuits comprise virtual paths (VP) and virtual channels (VC). A VC is a one-way channel for the transport of cells; a VP is a collection of VCs.

    Payload Type (PT) distinguishes between different types of information in the 48-byte payload field.

    Cell Loss Priority (CLP) differentiates the priority of a cell (CLP = 0 or 1) subject to discarding by the network. Lower priority cells may be marked for discarding during congestion.

    Header Error Control (HEC) protects the header from error.
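As a concrete illustration, the 40-bit UNI header above can be unpacked with a few shifts and masks. This is an illustrative Python sketch, not part of any ATM standard toolkit; the field widths follow the UNI layout (GFC 4 bits, VPI 8 bits, VCI 16 bits, PT 3 bits, CLP 1 bit, HEC 8 bits).

```python
def parse_uni_header(header: bytes) -> dict:
    """Unpack the five fields of a 5-byte ATM cell header at the UNI.

    Field layout (UNI): GFC 4 bits, VPI 8 bits, VCI 16 bits,
    PT 3 bits, CLP 1 bit, HEC 8 bits (40 bits in total).
    """
    if len(header) != 5:
        raise ValueError("an ATM header is exactly 5 bytes")
    b = int.from_bytes(header, "big")   # treat the header as a 40-bit integer
    return {
        "gfc": (b >> 36) & 0xF,         # Generic Flow Control
        "vpi": (b >> 28) & 0xFF,        # Virtual Path Identifier
        "vci": (b >> 12) & 0xFFFF,      # Virtual Channel Identifier
        "pt":  (b >> 9)  & 0x7,         # Payload Type
        "clp": (b >> 8)  & 0x1,         # Cell Loss Priority
        "hec": b & 0xFF,                # Header Error Control
    }
```

At the NNI the same idea applies, except that the 4 GFC bits are reassigned to the VPI, giving a 12-bit VPI.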


More detailed information concerning ATM headers can be found in the following reading. Keep in mind that the header structure at the NNI (Network-Network Interface) is similar to the header structure at the UNI, except that the GFC field is absent and the VPI is extended to 12 bits.

    The next section explains VP concepts and their advantages.

    Virtual path concepts

In an ATM network, end-to-end connections are established between end stations before the traffic can start flowing. Routing of cells in the network is performed at every switch for each arriving cell. Two levels of identifiers are used to route cells in ATM networks: Virtual Channel Identifiers (VCI) and Virtual Path Identifiers (VPI).

A virtual channel (VC) is a concept used to describe unidirectional transport of ATM cells between two end stations associated by a common unique identifier value, referred to as the VCI. A virtual path (VP) is a concept used to describe unidirectional transport of cells belonging to virtual channels that are associated by a common identifier value, referred to as the VPI.

An analogy for VCI and VPI is the seat number on a flight and the flight number. For example, you are assigned seat 20C (VCI) on flight UA001 (VPI), which goes from Hong Kong to Los Angeles via Tokyo. All the passengers on flight UA001 will get on the plane in Hong Kong and go to Tokyo. Some passengers, however, may go through transit to another flight (a new VPI) and

    Tanenbaum, A (1996) Computer Networks, Prentice Hall: Section 5.6.1, pp. 450452. This reading examines the ATM cell structure and explains the function of different fields in the ATM header.

    READING 2.6a

    What are the disadvantages of using a 5-byte header in an ATM cell?

    SELF-TEST 2.3


get new seat numbers (a new VCI) in Tokyo, while some passengers will stay in their seats on flight UA001 to continue their trip to Los Angeles. When the plane arrives in Los Angeles, all passengers leave the plane (the VP ends). The flight number of the plane may then change to UA311 (a new VP) for a journey to Boston.

VPI/VCI addresses are only meaningful to local connections within the VC or VP, so there is no need for a global address as there is in TCP/IP. This greatly reduces the size of VPI/VCI addresses. To extend our airline analogy, the flight number and seat numbers are only meaningful to one particular airline (United Airlines in this case).

This virtual path concept is adopted in the ATM standard. The VPI provides an explicit path identification for a cell, while the VCI provides explicit circuit identification for a cell. Basically, a virtual path is a bundle of virtual circuits that are switched as a unit by defining one additional layer of multiplexing on a per-cell basis underneath the VCI. A predefined route is provided with each virtual path, so it is not necessary to rewrite the routing table at call set-up. Call-by-call processing at switching nodes is therefore reduced, and call set-up delay is decreased. That is, VPs are semi-permanent connections.

To take full advantage of the VP concept, each VP is allocated a fixed amount of bandwidth. To establish a connection between two end-stations, the network simply finds a suitable path consisting of a sequence of VPs that has sufficient bandwidth to support the connection. In other words, the VP concept provides a logical link between two nodes, creating an overlay network of logical links on top of the physical network. Now why not re-read this paragraph and think of it in terms of the United Airlines flight to Los Angeles?

In short, the VP/VC scheme gives lower node cost at the expense of higher link cost.
This characteristic is valuable to ATM networks because large bandwidth will be available as high capacity optical fibres become more widely used. A transmission path may contain several virtual paths and each virtual path may carry several virtual circuits. The next reading examines the process of connection set-up and routing in ATM using the VP concept.
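The two-level label idea can be sketched as a toy switching table. All port numbers and label values below are invented for illustration; the point is that VPI/VCI values are only locally significant, so each hop looks up the incoming labels and rewrites them for the outgoing link, and a pure VP switch rewrites only the VPI (the "flight number") while leaving the VCI (the "seat number") untouched.

```python
# Toy ATM switch: (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci).
# All entries here are illustrative, not from any real configuration.
vc_switch_table = {
    (0, 1, 42): (2, 7, 99),   # full VC switching: both labels rewritten
    (1, 3, 10): (0, 3, 10),   # labels may also pass through unchanged
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Return (out_port, out_vpi, out_vci) for an arriving cell."""
    try:
        return vc_switch_table[(in_port, vpi, vci)]
    except KeyError:
        raise LookupError("no virtual connection established for this cell")

# A pure VP switch consults only the VPI and carries the VCI through,
# like a transit passenger keeping the same seat number.
vp_switch_table = {(0, 1): (2, 7)}

def vp_switch_cell(in_port: int, vpi: int, vci: int):
    out_port, out_vpi = vp_switch_table[(in_port, vpi)]
    return out_port, out_vpi, vci
```

Because the VP switch never inspects the VCI, it has a far smaller table, which is exactly the "lower node cost" mentioned above.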

Tanenbaum, A (1996) Computer Networks, Prentice Hall: Sections 5.6.2–5.6.3, pp. 452–458.

    READING 2.6b


    The ATM Adaptation Layer (AAL)

The ATM adaptation layer (AAL) provides a range of alternative service classes for the transport of the information streams generated by the various higher protocol layers. These higher-level services include user services and control and management functions. Associated with each service class is a different adaptation protocol that converts the source information into streams of 48-byte cells. Recall that an ATM cell consists of a 48-byte payload field and a 5-byte header.

The AAL can be further divided into two sub-layers: the convergence sub-layer (CS) and the segmentation and re-assembly sub-layer (SAR). The CS is service dependent; it converts an information stream into packet streams of data according to service types. The SAR segments the source information into 48-byte cells for transfer and re-assembles the source information from cells at the destination.

In order for ATM to support many kinds of services with different traffic characteristics and system requirements, the different classes of applications must be adapted to the ATM layer. This function is performed by the AAL, which is service-dependent. The following four types of AAL are currently recommended by the ITU-T in the series I.363.1–I.363.5:

1. AAL type 1: adaptation for constant bit rate services;

2. AAL type 2: adaptation for variable bit rate services;

3. AAL type 3/4: adaptation for data services;

4. AAL type 5: adaptation for signalling and data services.
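The SAR function described above — cutting a higher-layer data unit into 48-byte payloads and padding the last one — can be sketched as follows. This is a deliberate simplification: the real AAL5 trailer (length field and CRC-32) is omitted, so the original length must be carried separately here.

```python
CELL_PAYLOAD = 48  # every ATM cell carries exactly 48 bytes of payload

def segment(data: bytes) -> list:
    """Split higher-layer data into 48-byte cell payloads (the SAR
    function), zero-padding the final payload. Simplified sketch:
    the AAL5 trailer (length, CRC-32) is not modelled."""
    payloads = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD]
        payloads.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return payloads

def reassemble(payloads: list, length: int) -> bytes:
    """The reverse operation at the destination: concatenate the
    payloads and strip the padding."""
    return b"".join(payloads)[:length]
```

For example, a 100-byte packet becomes three cells (48 + 48 + 4 bytes padded to 48), which is why small packets waste a noticeable fraction of ATM bandwidth.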

    What are the advantages of using the virtual path concept in terms of network management?

    SELF-TEST 2.4


    The specific functions of each AAL are as follows.

AAL type 1 supports delay-sensitive CBR applications.

AAL type 2 provides bandwidth-efficient transmission of low-rate, short, variable-length packets in delay-sensitive applications. It supports both VBR and CBR. AAL2 also provides for variable payload within cells and across cells.

The ITU-T originally recommended AAL types 3 and 4 for connection-oriented and connectionless services, respectively. Since the two supported services turned out to be almost the same, the ITU-T combined them to form AAL 3/4. AAL 3/4 packets are used to carry computer data.

Because AAL type 3/4 was originally designed to support connection-oriented services as well, it carries a high overhead. As a result, AAL type 5 was recommended to offer services with less overhead and better error detection. The ITU-T has adopted AAL type 5 for IP traffic and for signalling as well. AAL type 5 has been shown to achieve 20–30% better throughput than AAL type 3/4.

ATM QoS requirements

To provide different services in a single network, the network must know the characteristics and requirements of the services. For example, loss of some information is acceptable in voice transmission because the human ear may still be able to recognize the sounds transmitted. However, in data applications such as file transfers, the loss of a small amount of data may make all the transferred data useless. While lost data can be retransmitted for non-real-time applications, retransmission is not feasible for real-time applications. Depending on the type of service, different quality of service (QoS) thresholds must therefore be specified.

When the network accepts a connection request, the user and the network agree on a traffic contract for the duration of the connection. With this contract, the network guarantees the requested service demand of the connection as long as the source traffic complies with some specified limits. Accordingly, a traffic contract includes traffic descriptors, service requirements and conformance definitions. Some of the parameters adopted by the ATM Forum include:

1. Peak Cell Rate (PCR): the inverse of the minimum time between two cell submissions to the network, i.e., a connection's maximum transmission rate;

2. Sustainable Cell Rate (SCR): the long-term average rate of a connection;

3. Minimum Cell Rate (MCR): the inverse of the maximum time between two cells, i.e., the minimum usable bandwidth for a connection;


4. Burst Tolerance (BT): the duration for which the source is allowed to transmit traffic at its peak rate. It defines the Maximum Burst Size (MBS) of the source;

5. Cell Delay Variation Tolerance (CDVT): the permissible departure from the periodicity of the traffic.

These parameters are the traffic descriptors that specify the behaviour of the traffic; conformance to them is defined by the Generalized Cell Rate Algorithm (GCRA), which polices the arrival times of cells. Additional parameters describe the service requirements:

1. Cell Loss Ratio (CLR): the ratio of the number of lost cells to the total number of cells sent by a user over the lifetime of the connection;

2. Cell Transfer Delay (CTD): the delay experienced by a cell inside the network, including coding and packetization delay, propagation delay, transmission and switching delay, queuing delay and re-assembly delay;

3. Cell Delay Variation (CDV): the variance of the transmission delay of a connection.

    Please note that not all the QoS requirement parameters and traffic descriptors are used to specify the service class. Different service classes may specify only some of the above parameters.
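The GCRA mentioned above is compact enough to sketch directly in its virtual-scheduling form. This is a sketch following the standard algorithm: T is the inter-cell increment (1/PCR for peak-rate policing) and tau the tolerance (CDVT); time units are arbitrary.

```python
class GCRA:
    """Generalized Cell Rate Algorithm, virtual-scheduling form.

    A cell arriving at time t conforms unless it is earlier than the
    theoretical arrival time (TAT) minus the tolerance tau. Conforming
    cells push the TAT forward by the increment T.
    """
    def __init__(self, T: float, tau: float):
        self.T, self.tau = T, tau
        self.tat = 0.0                  # theoretical arrival time

    def conforming(self, t: float) -> bool:
        if t < self.tat - self.tau:
            return False                # arrived too early: non-conforming
        self.tat = max(t, self.tat) + self.T
        return True
```

For instance, with T = 10 and tau = 2, cells arriving every 10 time units all conform, but a cell that arrives 5 units early is flagged as non-conforming (and could then be dropped or have its CLP bit set).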

    ATM classes of service

    To support different types of service with diverse QoS requirements, the ATM Forum has defined several service classes in its traffic management specifications, including:

    1. Constant Bit Rate (CBR);

    2. Real-time Variable Bit Rate (rt-VBR);

    3. Non-real-time Variable Bit Rate (nrt-VBR);

    4. Available Bit Rate (ABR);

    5. Unspecified Bit Rate (UBR);

    6. Guaranteed Frame Rate (GFR).


Please read Sections 2.1–2.3 of Traffic Management Specification Ver. 4.1 from the ATM Forum for a detailed explanation of each service class. Pay particular attention to Table 2.1 for the relationship between the service classes and their corresponding traffic and QoS parameters. Some examples of different applications for the ATM service classes are also discussed in these assigned readings.

    The very long document from the ATM Forum that the above two extracts are taken from contains the most updated specifications for ATM traffic management. Although the document is very comprehensive, it is too detailed for the purpose of this course, so you should only read the sections specified here.

ATM Forum (March 1999) Traffic Management Specification Ver. 4.1, Sections 2.1–2.3, 3.1–3.2, pp. 4–8, 12.

    READING 2.7

ATM Forum (March 1999) Informative Appendix IV: Application Examples for ATM Service Categories, Traffic Management Specification Ver. 4.1, p. 95.

    READING 2.8

Tanenbaum, A (1996) Computer Networks, Prentice Hall: Sections 5.6.4–5.6.5, pp. 458–463.

(Optional) The International Engineering Consortium, Asynchronous Transfer Mode (ATM) Fundamentals, Web ProForum Tutorials: Section 4, pp. 6–7.

    READING 2.6c


    Traffic management in ATM networks

An ATM network consists of switches and user equipment connected by high speed links. An ATM switch has a number of input and output lines. When a cell arrives, the switch reads its virtual circuit identifier and sends the cell to the corresponding output link. Each multiplexer in the switch is equipped with a buffer that can store cells when they occasionally arrive faster than they can be transmitted. When the traffic is bursty, the network can take advantage of statistical multiplexing to improve efficiency.

Nevertheless, when the arrival rate at the buffer approaches its service rate, the queue length grows dramatically. The node becomes congested and the buffer starts to overflow (i.e., cells arriving at a full buffer are discarded). Dropped cells are eventually retransmitted by an upstream node (or by the source), causing the traffic load to increase further. As the number of retransmissions increases, more nodes become congested and more cells are dropped. Eventually, the network can reach a catastrophic state in which most of the cells in the network are retransmissions. Hence, resource management and traffic control are required to take full advantage of statistical multiplexing.

Traffic control using reactive mechanisms (such as window flow control) has been widely implemented in low-speed packet-switched networks. Window-based mechanisms typically rely on end-to-end exchanges of control messages to regulate traffic flow. However, because of the large bandwidth-delay products of high speed links, there are many cells in transit in the network at any time. With such reactive feedback control, by the time the control messages reach the traffic sources it is already too late, and the number of lost cells is very large. For instance, a 53-byte cell takes about 3 µs to be transmitted on a 155 Mbps link (a standard ATM link speed), so a feedback delay of 30 ms corresponds to roughly 10⁴ cell transmission times.
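A quick back-of-envelope check of these figures, assuming a 155 Mbps link and a 30 ms feedback delay:

```python
# How many cell transmission times fit inside the feedback delay?
CELL_BITS = 53 * 8            # 424 bits per ATM cell
LINK_RATE = 155e6             # 155 Mbps link
FEEDBACK_DELAY = 30e-3        # 30 ms round-trip control delay

cell_time = CELL_BITS / LINK_RATE          # a few microseconds per cell
cells_in_flight = FEEDBACK_DELAY / cell_time

print(f"cell time: {cell_time * 1e6:.2f} us")
print(f"cell times per feedback delay: {cells_in_flight:.0f}")
```

The result is on the order of 10⁴ cell times, which is why purely reactive feedback arrives far too late on high speed links.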
Thus, the feedback from the network is usually outdated, and any action the source takes may be too late to resolve network congestion. This is an argument for preventive mechanisms that do not rely so heavily on network feedback. It is also important that the congestion control mechanism operates at the speed of the communication link. For this reason, computation-intensive control

    Give an example of an application in each service class.

    SELF-TEST 2.5


schemes are less desirable than simple schemes that can be easily implemented in high speed hardware. Therefore, preventive control methods have been proposed to take appropriate action before congestion actually occurs. Nevertheless, it is generally agreed that preventive control techniques alone are not sufficient to eliminate congestion problems in ATM networks, and that when congestion happens, it is necessary to react to the congested state. In addition, preventive controls may be inefficient because bandwidth may be over-allocated. Reactive control techniques are used to initiate recovery from a congested state so that the network does not remain in long-term congestion.

Accordingly, the congestion control schemes in ATM can be further classified into different layers according to different time scales. This multilevel congestion evaluation and control is motivated by the fact that communication terminals often have traffic states characterized by these levels. Thus, to improve network efficiency, traffic control should be exercised at these levels.

The first of the following two readings presents an overview of the traffic management issues and possible solutions to support multimedia over an ATM network. You will find that different levels of management mechanisms are used to control the traffic. The next reading, which is optional, explains the different levels of management and control in ATM networks.

Zheng, B and Atiquzzaman, M (1999) Traffic management of multimedia over ATM networks, IEEE Communication Magazine, 37(1) (Jan.): 33–38.

(Optional) The International Engineering Consortium, Asynchronous Transfer Mode (ATM) Fundamentals, Web ProForum Tutorials, Section 9, pp. 14–15.

    READING 2.9

    Please briefly describe three different traffic management strategies in ATM networks.

    ACTIVITY 2.2


    In the previous discussion, we suggested that congestion prevention policies are required in high speed networks. Two common preventive control mechanisms are call admission control and traffic shaping.

Call admission control

In ATM networks, the bandwidth of a VC is not clearly defined, since all information is segmented into cells and the required number of cells is generated and conveyed through the network. As the number of VCs in a VP increases, more cells enter the network and the chance of congestion increases. The QoS may then deteriorate: for example, the cell loss ratio and the cell delay increase. It is therefore impossible to accept an unlimited number of VCs while guaranteeing QoS.

Call admission control (CAC) is defined as the set of actions taken by the network at the call set-up phase (or during the call re-negotiation phase) to determine whether a connection can be accepted or rejected. A call is accepted only if the network has sufficient resources to meet the QoS requirements of the connection without affecting the QoS provided to existing connections. Consequently, the CAC mechanism has two major functions:

    1. Bandwidth allocation determines the amount of bandwidth required by a new connection.

    2. Performance monitoring ensures that the QoS required by existing connections is not affected when multiplexed together with this new connection.
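A minimal sketch of the admission decision, under the strong simplifying assumption that each connection can be summarized by a single "effective bandwidth" figure; real CAC schemes are considerably more elaborate, but the accept/reject logic has this shape.

```python
def admit(requested_bw: float, existing_bw: list, link_capacity: float) -> bool:
    """Accept the new connection only if the link can carry it on top of
    all existing connections without over-committing its capacity.

    All bandwidth figures are illustrative effective-bandwidth values
    (e.g. in Mbps), not a prescribed ATM Forum procedure.
    """
    return sum(existing_bw) + requested_bw <= link_capacity

# e.g. a 155 Mbps link already carrying connections of 100 and 40 Mbps:
assert admit(10, [100, 40], 155)       # 150 <= 155: accept the call
assert not admit(20, [100, 40], 155)   # 160 > 155: reject the call
```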

Traffic shaping

One major advantage of ATM is the statistical multiplexing gain of bursty traffic. Now, suppose we have a source sending 100 Mbps for 1 second and remaining silent for 99 seconds. The average rate of the source is thus only 1 Mbps over a period of 100 seconds. Since 100 Mbits of information can be dumped into the network during the 1-second period, this highly bursty traffic source can jeopardize the whole network. The situation is similar to sending thousands of email bombs to an email server at the same time. As a result, a control mechanism is introduced to force the traffic to be transmitted at a more predictable rate. This preventive mechanism is known as traffic shaping.

Traffic shaping regulates the average transmission rate of data by smoothing out bursty traffic. For instance, the shaped traffic might transmit at 10 Mbps over a period of 10 seconds, which the network can easily handle. One traffic shaping algorithm commonly used in ATM is the leaky bucket. In the


    following reading, you will learn the basic concepts related to the leaky bucket and token bucket algorithms.
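Ahead of the reading, here is a discrete-time sketch of the leaky bucket idea: bursts enter a finite queue and drain at a constant rate, so the output is smooth and cells that overflow the bucket are dropped. The rate and depth values below are illustrative.

```python
from collections import deque

class LeakyBucket:
    """Leaky-bucket traffic shaper (discrete-time sketch).

    Arriving bursts fill a finite queue (the bucket); each tick of the
    clock drains at most `rate` cells, producing a smooth output.
    """
    def __init__(self, rate: int, depth: int):
        self.rate, self.depth = rate, depth
        self.queue = deque()

    def arrive(self, n_cells: int) -> int:
        """Offer a burst of cells; return how many were dropped."""
        space = self.depth - len(self.queue)
        accepted = min(n_cells, space)
        self.queue.extend([0] * accepted)   # queue the accepted cells
        return n_cells - accepted           # overflow is discarded

    def tick(self) -> int:
        """Advance one time unit; return the cells emitted this tick."""
        emitted = min(self.rate, len(self.queue))
        for _ in range(emitted):
            self.queue.popleft()
        return emitted
```

A token bucket, by contrast, accumulates permission to send while the source is idle, so it lets bounded bursts through unsmoothed — a distinction worth watching for in the reading.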

    Integration of IP and ATM

ATM is a new technology that can be used to support B-ISDN. ATM's major advantages include its scalability of bandwidth and its capability of supporting multi-type traffic with a QoS guarantee. In principle, therefore, ATM could function as a universal network. However, the Internet is widely accepted as the universal communication solution today, and given the dominance of IP, it is unlikely that ATM will replace all existing networks. This led to a hot debate between IP and ATM in the past few years. The growing demand for QoS support on the Internet has changed the situation: the industry has taken a more realistic approach, integrating IP and ATM to get the speed of ATM and the flexibility of IP. This is the basic concept of IP over ATM, which is discussed in the first of the following two readings. You can then take a look at Dahod's paper for an overview of the issues of IP v. ATM.

    Tanenbaum, A (1996) Computer Networks, Prentice Hall: Section 5.3.3, pp. 379384.

    READING 2.10

    Please briefly describe two major differences between the leaky bucket and the token bucket.

    SELF-CHECK 2.6

Reading 2.11: White, P (1998) ATM switching and IP routing integration: the next stage in Internet solution? IEEE Communications Magazine, 36(4) (April): 79–83.

    READING 2.11


    ATM v. Gigabit Ethernet

Earlier in the topic it was suggested that both ATM and Gigabit Ethernet can offer a bandwidth of 1 Gbps. So now that you've learned something about both of these technologies, how does Gigabit Ethernet compare to ATM? Well, we can say that Gigabit Ethernet and ATM are complementary technologies.

Ethernet is the most popular and dominant LAN technology, and the introduction of Gigabit Ethernet is likely to simply extend this dominance. Gigabit Ethernet's backward compatibility with the existing 802.3 Ethernet is likely to be the key to its success. We shall see many more companies using Gigabit Ethernet as the backbone network.

On the other hand, ATM, as a technology that supports B-ISDN, has its own advantages. ATM is ideal for use in wide area network (WAN) connections where the need for support of integrated services and real-time applications is especially strong. For instance, ATM is widely deployed as the high speed WAN interconnecting cities in the USA. The ability to provide QoS guarantees is another crucial feature missing in traditional Ethernet networks. ATM can also be used within LANs where integration with an ATM WAN is important.

You should now read the following articles, which address the question of ATM v. Gigabit Ethernet.

Reading 2.12: Dahod, A (1999) Viewpoint: ATM v. Internet Protocol, IEEE Spectrum, 36(1) (Jan.): 30–31.

    READING 2.12

    1. What are the advantages of IP over ATM?

    2. What is LAN emulation?

    3. Please identify two non-classical approaches to support IP over ATM.

    ACTIVITY 2.3


    Competing technologies IP over SONET

So far, we have suggested that ATM is an ideal solution for future broadband service. The integration of IP with ATM is a promising evolutionary development for the next generation of the Internet. However, there are also disadvantages to using ATM as a transport technology. Recall that each 53-byte ATM cell carries a 5-byte header; this means ATM has roughly 10% (5/53 ≈ 9.4%) overhead. In other words, ATM can achieve at most about 90% link efficiency.

Is it possible to improve this efficiency? Well, the answer is yes. At present, most networks are structured as follows: IP packets are transported on top of ATM, and ATM in turn runs over SONET. If we remove the ATM layer completely and run IP directly over SONET, we can eliminate the loss due to ATM header overhead. This is the basic concept of IP over SONET. The next reading provides an overview of IP over SONET technology and gives you another high speed backbone solution.

You may wonder: should we use ATM or not? Well, it's hard to say. As many emerging technologies of the past have shown, a sound technology is not necessarily a marketable one. Thus, only time can tell which technology will survive.
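The overhead arithmetic is easy to verify:

```python
# Every 53-byte cell carries only 48 bytes of payload, so removing the
# ATM layer reclaims the header's share of the link.
HEADER, PAYLOAD = 5, 48
CELL = HEADER + PAYLOAD

overhead = HEADER / CELL          # roughly 10% of the link
efficiency = PAYLOAD / CELL       # the "about 90%" efficiency ceiling

print(f"ATM header overhead: {overhead:.1%}")
print(f"Maximum payload efficiency: {efficiency:.1%}")
```

(This counts only cell headers; AAL padding and trailers reduce the usable fraction further.)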

    ATM for residential broadband

    ATM was first designed to operate over optical fibre as a backbone network. With increasing demand for multimedia applications, home users are requiring more bandwidth. This motivates the extension of broadband ATM service to residential users.

Rauch, P (1999) Viewpoint: ATM vs. Gigabit Ethernet, IEEE Spectrum, 36(1) (Jan.): 32–33.

    READING 2.13

Manchester, J et al. (1998) IP over SONET, IEEE Communications Magazine, 36(5) (May): 136–142.

    READING 2.14


New broadband-access copper technologies such as the Asymmetric Digital Subscriber Line (ADSL) have been introduced, allowing a maximum of 8 Mbps downstream and 640 kbps upstream transmission over ordinary telephone lines. By running ATM on top of ADSL networks (ATM over ADSL), operators can provide high speed connections to residential users. The details of ADSL will be discussed in another topic.

The following article describes how IP/ATM integrated services can be offered over broadband access copper technologies (xDSL). Please go through this article quickly; your goal should be simply to gain a basic grasp of xDSL concepts. You will encounter more detailed information regarding xDSL in another topic.

    (Optional) Further information

The ATM Forum is the main organization pushing the advancement of ATM. The following article describes some of the key achievements of the ATM Forum, and should help you to better understand the development of the ATM standard so far. You are also encouraged to visit the Interoperability Lab website. Its homepages contain several tutorials on new networking technologies; interested students can visit the site for more detailed information. In addition, this site has links to other useful websites.

Readings (optional)

Dobrowski, G (1998) The ATM Forum: developing implementation agreements, IEEE Communication Magazine, 36(9) (Sept): 121–125.

Tutorials and Resources, University of New Hampshire, Interoperability Lab

Azcorra, A et al. (1999) IP/ATM integrated services over broadband access copper technologies, IEEE Communications Magazine, 37(5) (May): 90–96.

    READING 2.15


This topic discussed several high speed networks. We started with Gigabit Ethernet, the extension of the popular 802.3 Ethernet network. Its backward compatibility with 802.3 Ethernet is likely to ensure the quick acceptance of Gigabit Ethernet in the LAN market.

We then focused our study on ATM, the supporting technology for B-ISDN. ATM is a connection-oriented, packet-switching technique that uses short fixed-size packets called cells to transfer information. The layered B-ISDN model for ATM was briefly described, as were the functions of the different fields in the ATM header and the various types of AAL that adapt higher-layer services to the services supported by the ATM layer. ATM can support multi-type traffic with the QoS guarantees that are essential for B-ISDN. Different traffic classes can be supported in ATM networks, and they were specified in terms of traffic parameters.

Traffic management is essential to high speed networking. Various control mechanisms to support multimedia were briefly discussed, and we introduced the basic concepts of call admission control and traffic shaping.

Since the purpose of a computer network is to interconnect different nodes, we described several approaches to supporting IP over ATM and made a simple comparison between Gigabit Ethernet and ATM. We then described a competing technology, IP over SONET. Finally, we discussed the provision of ATM service to home users with new xDSL technologies. More information on residential broadband services will be given in another topic.


    Asynchronous Transfer Mode (ATM)

    A connection-oriented, packet-switching technique that uses short fixed-size packets called cells.

    ATM Adaptation Layer (AAL)

    The AAL converts the higher layer information stream into a stream of 48-byte data cells.

Bandwidth delay product (BDP)

The product of bandwidth and delay. In the data link layer, the BDP represents the maximum amount of allowed unacknowledged data inside a network.

    Broadband Integrated Service Digital Network (B-ISDN)

    A single high speed digital network to support multi-type traffic such as voice, video, image and data.

Call admission control

The set of actions taken by the network at the call set-up phase (or during the call re-negotiation phase) to determine whether a connection can be accepted or rejected.

Gigabit Ethernet

The IEEE 802.3z 1 Gbps version of 802.3 Ethernet.

IP-over-ATM

Transport of IP packets over ATM networks.

Multiplexing gain

The ratio of total input capacity to total output capacity.

Quality of Service (QoS)

The service level required by an application, generally measured in terms of delay, bandwidth and packet loss.

Statistical multiplexing

The multiplexing technique that allows several variable-bit-rate (VBR) connections to share a link with a capacity less than the sum of their peak bit rate requirements.

Traffic shaping

The mechanism to regulate the average rate of data transmission.

Virtual channel

A concept used to describe unidirectional transport of ATM cells between two end stations associated by a common unique identifier value, the VCI.


Virtual path

A concept used to describe unidirectional transport of cells belonging to virtual channels that are associated by a common identifier value, the VPI.

Wireless ATM

The extension of ATM technology to wireless communication.

xDSL

A collective term for the different digital subscriber line technologies.

    SOLUTIONS TO SELF-TEST QUESTIONS

    Self-test 2.1

    Advantages of statistical multiplexing over time-division multiplexing:

    When the traffic is bursty, statistical multiplexing is more bandwidth-efficient. The multiplexing gain of statistical multiplexing can be much larger than that of time-division multiplexing.

    Disadvantages of statistical multiplexing over time-division multiplexing:

    It is more complicated to implement the multiplexer and demultiplexer.

    There is additional overhead to identify the source.

    When the traffic is smooth (e.g. constant bit rate), time-division multiplexing can be more bandwidth efficient.

    Self-test 2.2

    Advantages of ATM:

High performance via hardware switching: the small header and fixed packet size reduce the processing delay.

Dynamic bandwidth for bursty traffic: statistical multiplexing of bursty traffic reduces the required bandwidth.

Class-of-service support for multimedia: supports multiple service classes with different QoS.

Scalability in speed and network size: the cell switching technique is flexible enough to allow changes in speed and network size.


Common LAN/WAN architecture: the same ATM network architecture can be used in both LAN and WAN environments.

Opportunities for simplification: the use of VP/VC connections simplifies traffic management, security and connection set-up.

International standards: vendors can produce standardized products according to the ITU-T's specifications.

    Self-test 2.3

Disadvantages of using a 5-byte header in an ATM cell:

Only limited functions can be implemented in the ATM header; e.g. no error protection or flow control on a link-by-link basis.

Large overhead, because part of each cell is used for the ATM header; the maximum bandwidth efficiency is thus only around 90%.

    Self-test 2.4

    Advantages of VP concept:

    Using the VP concept, network resources are semi-permanently allocated. This allows an efficient and simple management of available network resources.

    For instance, if a company wants 10 Mbps bandwidth, you just set up a VP with 10 Mbps bandwidth to this company. This is similar to hiring a circuit.

    To admit a call, we only need to find a suitable VP with sufficient resources to guarantee QoS requirements. This simplifies call admission and call routing.

If congestion or a breakdown occurs, all the VCs within a VP can be redirected to another VP connection by simply changing the VPI values.

    Self-test 2.5

CBR: telephone traffic, CBR video

rt-VBR: interactive compressed video

nrt-VBR: multimedia email

ABR: file transfer

UBR: TCP/IP traffic

