
 

Chapter 11
VLSI FOR TELECOMMUNICATION SYSTEMS



11.5. ATM Networks

11.5.1. Asynchronous Transfer Mode

Before describing the fundamentals of ATM networks, we will define a few concepts, such as transfer mode and multiplexing, that are needed to understand the main points of ATM.

The concept of transfer mode summarizes two ideas related to information transmission in telecommunication networks: how information is multiplexed, i.e. how different messages share the same communication circuit, and how information is switched, i.e. how the messages are routed to the destination node.

11.5.1.1. Multiplexing fundamentals

The concept of multiplexing is related to the way in which several communications can share the same transmission medium. As seen in 2.1, the techniques used are time-division multiplexing (TDM) and frequency-division multiplexing (FDM). The former can be either synchronous or asynchronous.

In STD (synchronous time-division) multiplexing, a periodic structure divided into time intervals, called a frame, is defined, and each time interval is assigned to a communication channel. As the number of time intervals in each frame is fixed, each channel has a fixed capacity. The information delay is just a function of the distance and the access time because there is no conflict in accessing the resources (time intervals).

In ATD (asynchronous time-division) multiplexing, the time intervals used by a communication channel are neither part of a frame nor assigned in advance. Any time interval can be assigned to any channel, and each information unit carries a label identifying the channel it belongs to. With this scheme, any source may transmit information at any time, provided that there are enough free resources in the network.
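
As a toy illustration (a sketch only, with hypothetical channel names and payloads), the following Python fragment interleaves labelled units from several sources onto one link: each transmitted unit carries its channel label instead of occupying a fixed slot position within a frame.

    def atd_multiplex(sources):
        """Interleave labelled information units from several sources onto one link.

        sources: dict mapping a channel label to the list of units it wants to send.
        Returns the sequence of (label, unit) pairs as they appear on the link.
        """
        queues = {label: list(units) for label, units in sources.items()}
        link = []
        while queues:
            for label in list(queues):
                if queues[label]:
                    link.append((label, queues[label].pop(0)))  # the label identifies the channel
                else:
                    del queues[label]  # a silent source simply uses no time intervals
        return link

    # A bursty "data" source uses only one interval; no intervals are reserved for it.
    print(atd_multiplex({"voice": ["v1", "v2", "v3"], "data": ["d1"]}))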

11.5.1.2. Switching fundamentals

The concept of switching refers to the routing of information from an origin node to an end node. The different switching techniques have already been discussed in 11.4.1-11.4.4.

11.5.1.3. Multiplexing and switching techniques used in ATM networks

ATM networks use ATD (asynchronous time-division) as the multiplexing technique and cell switching as the switching technique.

With ATD multiplexing, variable bit-rate sources can be connected to the network because time intervals are assigned to channels dynamically.

Circuit switching is not a suitable technique if variable bit-rate sources are to be supported, because with this technique the bit rate must remain constant after connection establishment. This fixed assignment is not just an inefficient use of the available resources but also contradicts the main goal of B-ISDN (Broadband Integrated Services Digital Network), where each service has different requirements. ATM networks will be a key element in the development of B-ISDN, as stated in ITU (International Telecommunication Union) recommendation I.121.

General packet switching is not a suitable solution for ATM networks either, because of the difficulty of integrating real-time services. However, as it has the advantage of efficient resource usage for bursty sources, the switching technique adopted in ATM networks is a variant of it: cell switching.

Cell switching works similarly to packet switching. The differences between the two are the following:

The size of the ATM cell header is 5 octets (approximately 10 % of the 53-octet cell). Such a small header allows fast processing in the network nodes. The size of the cell payload is 48 octets. This small payload keeps store-and-forward delays low in the network switching nodes (see figure 11.15).

The decision about the payload size was a trade-off between different proposals. While conventional data communication prefers longer payloads to reduce overhead, video communication, which is more sensitive to delays, prefers smaller ones. The choice of the current payload size was a Solomonic decision: in Europe the preferred payload size was 32 octets, while in the USA and Japan it was 64 octets. Finally, at a meeting held in Geneva in June 1989, it was agreed to adopt the average of the two proposals: 48 octets.

11.5.2. ATM network interfaces

In ATM networks, the interface between the network user (either an end node or a gateway to another network) and the network is called the UNI (User-Network Interface). The UNI specifies the possible physical media, the cell format, the mechanisms to identify the different connections established through the same interface, the total access rate and the mechanisms to define the parameters that determine the quality of service.

The interface between a pair of network nodes is called the NNI (Network-Node Interface). This interface is mainly dedicated to routing and switching between nodes. In addition, it is designed to allow interoperability between switching fabrics from different vendors.

11.5.3. ATM Cell format

The header format depends on whether a cell is at the UNI or at the NNI. The functions of the cell header fields are shown in Fig. 11.15.

Cells can be classified into one of the following types:

 

 


Figure-11.15: ATM cell format.
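
As a complement to the figure, the following sketch decodes a 5-octet cell header assuming the standard UNI field layout (GFC, VPI, VCI, PT, CLP and HEC); the example header bytes are arbitrary and only meant for illustration.

    def parse_uni_header(hdr: bytes) -> dict:
        """Extract the fields of a 5-octet ATM cell header in the UNI format."""
        assert len(hdr) == 5
        gfc = hdr[0] >> 4                                             # generic flow control (4 bits)
        vpi = ((hdr[0] & 0x0F) << 4) | (hdr[1] >> 4)                  # virtual path identifier (8 bits)
        vci = ((hdr[1] & 0x0F) << 12) | (hdr[2] << 4) | (hdr[3] >> 4) # virtual channel identifier (16 bits)
        pt = (hdr[3] >> 1) & 0x07                                     # payload type (3 bits)
        clp = hdr[3] & 0x01                                           # cell loss priority (1 bit)
        hec = hdr[4]                                                  # header error control (8 bits)
        return {"GFC": gfc, "VPI": vpi, "VCI": vci, "PT": pt, "CLP": clp, "HEC": hec}

    # Arbitrary example: a cell carrying VPI = 5 and VCI = 33.
    print(parse_uni_header(bytes([0x00, 0x50, 0x02, 0x10, 0x6D])))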

11.5.4. Protocol Architecture

The protocol stack architecture used in ATM networks considers three different planes: the user plane, the control plane and the management plane.

We will now describe the functions of the different layers in the user plane of the protocol stack.

11.5.4.1 Physical layer

This is the layer responsible for information transport. It is divided into two sublayers: the physical medium (PM) sublayer and the transmission convergence (TC) sublayer.

The TC sublayer adapts the cells received from the ATM layer to the specific format used by the transmission system.

11.5.4.2. ATM layer

This layer provides a connection-oriented service, independently of the transmission medium used. Its main functions are cell multiplexing and demultiplexing, VPI/VCI translation, cell header generation and extraction, and generic flow control.

11.5.4.3. AAL (ATM Adaptation Layer)

On the transmitter side, this layer adapts the information coming from higher layers to the ATM layer; on the receiver side, it adapts the ATM services to the higher-level requirements. It is divided into three sublayers: the segmentation and reassembly (SAR) sublayer, the common part convergence sublayer (CPCS) and the service-specific convergence sublayer (SSCS).
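
As a simplified illustration of the adaptation idea (a sketch only, not any particular AAL), the fragment below segments a higher-layer PDU into 48-octet cell payloads and reassembles it; real AALs add type-specific headers or trailers, such as the length field and CRC of AAL5, which are omitted here.

    CELL_PAYLOAD = 48  # octets carried by each ATM cell

    def segment(pdu: bytes) -> list:
        """Split a PDU into 48-octet payloads, zero-padding the last one."""
        pad = (-len(pdu)) % CELL_PAYLOAD
        padded = pdu + bytes(pad)
        return [padded[i:i + CELL_PAYLOAD] for i in range(0, len(padded), CELL_PAYLOAD)]

    def reassemble(payloads: list, original_length: int) -> bytes:
        """Concatenate the payloads and strip the padding (length assumed known here)."""
        return b"".join(payloads)[:original_length]

    message = b"higher-layer protocol data unit" * 5   # 155 octets
    cells = segment(message)                            # 4 payloads of 48 octets
    assert reassemble(cells, len(message)) == message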

11.5.5. ATM switching

As cell-switching networks, ATM networks require a connection establishment phase. It is at this moment that all the communication requirements are specified: bandwidth, delay, information priority and so on. These parameters are defined for each connection and, independently of what is happening at other points of the network, they determine the connection's quality of service (QoS). A connection is established if and only if the network can guarantee the quality demanded by the user without disturbing the quality of the already existing connections.

In ATM networks it is possible to distinguish two levels in each virtual connection, each of them defined by its own identifier:

Virtual paths are associated with the highest level of the virtual connection hierarchy. A virtual path is a set of virtual channels connecting ATM switches to other ATM switches or ATM switches to end nodes.

Virtual channels are associated with the lowest level of the virtual connection hierarchy. A virtual channel allows unidirectional communication between end nodes, between gateways and end nodes, and between LANs (Local Area Networks) and ATM networks. As the communication provided is unidirectional, each full-duplex communication will consist of two virtual channels (each of them following the same path through the network).

Virtual channels and paths can be established either dynamically, by means of signaling protocols, or permanently. Usually, paths are permanent connections while channels are dynamic ones. In an ATM virtual connection, the input cell sequence is always preserved at the output.

In ATM networks, cell routing is achieved by means of the VPI/VCI pair. This information is not an explicit address but a label, i.e. cells do not carry the end-node address in their headers but identifiers that change from switch to switch on the way to the end node. Switching in a node begins by reading the VPI/VCI fields of the input cell header (empty cells are managed in a special way: once identified, they are simply dropped at the switch input). This pair of identifiers is used to access the routing table of the switch to obtain, as a result, the output port and a newly assigned VPI/VCI pair. The next switch in the path will use this new pair of identifiers in the same way, and the procedure is repeated.
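
A minimal sketch of this label-swapping step is shown below; the routing table entries and port numbers are purely illustrative.

    # (input port, VPI, VCI) -> (output port, new VPI, new VCI); illustrative entries only
    routing_table = {
        (0, 5, 33): (2, 7, 48),
        (1, 5, 34): (3, 1, 90),
    }

    def switch_cell(in_port: int, vpi: int, vci: int):
        """Translate the labels of an incoming cell and select its output port."""
        out_port, new_vpi, new_vci = routing_table[(in_port, vpi, vci)]
        return out_port, new_vpi, new_vci

    # A cell arriving on port 0 with VPI/VCI = 5/33 leaves on port 2 relabelled 7/48.
    print(switch_cell(0, 5, 33))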

Switches can be of two types: VP switches, which switch entire virtual paths and therefore translate only the VPI, and VP/VC switches, which switch individual virtual channels and translate both the VPI and the VCI.

11.5.6. ATM services

In an ATM network it is possible to negotiate different levels or qualities of service in order to adapt the network to many applications and to offer users a flexible way of accessing the resources.

If we study the main service characteristics, we can establish a service classification and define different adaptation levels for each service. Four different service classes are defined for ATM networks (Table 11.1).

 

 

CLASS   BIT RATE   DELAY          CONNECTION-ORIENTED   APPLICATIONS
  A     Constant   Constant       Yes                   Telephony, voice
  B     Variable   Constant       Yes                   Compressed video and voice
  C     Variable   Not constant   Yes                   Data applications
  D     Variable   Not constant   No                    LAN interconnections

Table-11.1: ATM service classes.

Once the different services have been characterized, it is possible to define the different adaptation layers. There are four adaptation layers in ATM networks.

11.5.7. Traffic control in ATM networks

The main objective of the traffic control function in ATM networks is to guarantee optimal network performance in the following aspects:

Basically, traffic control in ATM networks follows a preventive approach: it avoids congestion states, whose immediate effects are excessive cell dropping and unacceptable end-to-end delays.

Traffic control can be applied from two different sides. On the network side, it incorporates two main functions: Call Acceptance Control (CAC) and Usage Parameter Control (UPC). On the user side, it mainly takes the form of either source rate control or layered source coding (prioritization) to conform to the service contract specification.

11.5.7.1. Call acceptance control

CAC (call acceptance control) is performed during call setup to ensure that the admission of a call will not disturb the existing connections and that enough network resources are available for the call. It is also referred to as call admission control. The CAC results in a service contract.
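
The fragment below sketches the simplest possible admission rule, peak-rate allocation on a single link; the capacity figure and the request format are assumptions made for illustration only.

    class Link:
        """Admission control on one link based on peak-rate allocation."""

        def __init__(self, capacity_mbps: float):
            self.capacity = capacity_mbps
            self.allocated = 0.0

        def admit(self, peak_rate_mbps: float) -> bool:
            """Accept the call only if its peak rate still fits on the link."""
            if self.allocated + peak_rate_mbps <= self.capacity:
                self.allocated += peak_rate_mbps
                return True       # service contract established
            return False          # admission would disturb the existing connections

    link = Link(155.0)            # e.g. an STM-1 access link
    print(link.admit(100.0))      # True
    print(link.admit(80.0))       # False: not enough capacity left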

11.5.7.2. Usage parameter control

UPC (usage parameter control) is performed during the lifetime of a connection. Its purpose is to check whether the source traffic characteristics respect the service contract specification. If excessive traffic is detected, it can be either immediately discarded or tagged for selective discarding should congestion be encountered in the network. UPC is also referred to as traffic monitoring, traffic shaping, bandwidth enforcement or cell admission control. The Leaky Bucket (LB) scheme is a widely accepted implementation of a UPC function.
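
The following fragment sketches a continuous-state leaky bucket policing a single connection; the increment and limit parameters stand for the contracted rate and burst tolerance and are illustrative assumptions, not values taken from the text.

    class LeakyBucket:
        """Continuous-state leaky bucket used to police one connection."""

        def __init__(self, increment: float, limit: float):
            self.increment = increment   # amount added per conforming cell (~ 1 / contracted rate)
            self.limit = limit           # maximum bucket content (burst tolerance)
            self.level = 0.0
            self.last_arrival = 0.0

        def conforming(self, arrival_time: float) -> bool:
            """Return True if a cell arriving at arrival_time respects the contract."""
            # The bucket drains at unit rate between cell arrivals.
            self.level = max(0.0, self.level - (arrival_time - self.last_arrival))
            self.last_arrival = arrival_time
            if self.level + self.increment > self.limit:
                return False             # excessive traffic: discard or tag (CLP = 1)
            self.level += self.increment
            return True

    policer = LeakyBucket(increment=10.0, limit=25.0)
    print([policer.conforming(t) for t in (0.0, 1.0, 2.0, 8.0)])  # [True, True, False, True]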


This chapter edited by E. Juarez, L. Cominelli and D. Mlynek
EJM 17/2/1999