Review Of Literature



In this chapter, we discuss the concept of computer networks and layered protocols.

The merging of computers and communications has had a profound influence on the way computer systems are organised. With the development of more and more powerful microprocessor chips, there has been a shift in cost effectiveness from large time-shared computer facilities to small but increasingly powerful personal computers and workstations. Thus, the primary growth in computation is in the number of computer systems rather than in the increasing power of a small number of very large computer systems. The old model of a single computer serving all of the organisation’s computational needs has been replaced by one in which a large number of separate but interconnected computers do the job. These systems are called "COMPUTER NETWORKS"[2].

There are various reasons for this trend towards computer networks, among them resource sharing, improved reliability and the favourable price/performance of small computers.

There is no generally accepted taxonomy into which all computer networks fit, but two dimensions stand out as important: transmission technology and scale.

Broadly speaking, there are two types of transmission technology:

  1. Point-to-point networks

  2. Broadcast networks.

Point-to-point networks consist of many connections between individual pairs of machines. To go from source to destination, a packet in this type of network may first have to visit one or more intermediate machines. Often multiple routes are possible; hence routing algorithms play an important role here.

In contrast, broadcast networks have a single communication channel that is shared by all the machines on the network. Here any machine can send packets, and a small address field in the packet specifies for whom it is intended. These systems also allow the possibility of addressing a packet to all destinations. This mode of operation is called ‘broadcasting’. They also support transmission to a subset of the machines, called ‘multicasting’. Thus broadcast channels are also referred to as ‘multicast channels’ or ‘random access channels’.

Network Software & Protocol Hierarchies

To reduce their design complexity, networks are organised as a series of layers or levels, each one built upon the one below it. Thus layering or layered architecture, is a form of hierarchical modularity that is central to data network design. Each module performs a given function in support of the overall function of the system. Such a function is often called the service provided by the module. Thus the purpose of each layer is to offer certain services to the higher layers, shielding them from the details of how the offered services are actually implemented.

The rules and conventions used in a conversation between layer n on one machine and layer n on another machine are collectively known as the layer n protocol.

Basically, a protocol is an agreement between the communicating parties on how the communication is to proceed. It is a software module implementing a traditional communication protocol specification (e.g. TCP or IP). Protocols are traditionally composed in graphs, which provide services to applications. The arrangements in which protocols (depicted as nodes) may be composed to provide services are described (and in some cases constrained) by a protocol graph, while a protocol stack is the actual sequence of protocols through which messages of a particular session pass. The terms, though, are often used interchangeably[3]. As an example, a five-layer network is shown in Fig. 2-2 below.

Fig. 2-2 An example of a five-layer network

The entities comprising the corresponding layers on different machines are called peers. In reality, no data is transferred from layer n on one machine to layer n on another machine. Instead, each layer passes data and control information to the layer immediately below it, until the lowest layer is reached. Below layer 1 is the physical medium through which the actual communication occurs.

Between each pair of adjacent layers there is an interface, which defines the primitive operations and services the lower layer offers to the upper one. A set of layers and protocols is called a network architecture.

Layer Services

Layers can offer two different types of service to the layers above them: connection-oriented and connectionless.

Connection-oriented service is modeled after the telephone system. The service user first establishes a connection, then uses the connection and finally releases the connection. The essential aspect of a connection is that it acts like a tube: the sender pushes objects (bits) in at one end, and the receiver takes them out in the same order at the other end.

In contrast, connectionless service is modeled after the postal system. Each message carries the full destination address, and each one is routed through the system independent of all the others. Here if two messages are sent to the same destination, it is possible that the first one sent is delayed so that the second message arrives first. With a connection-oriented service this is impossible.
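The ordering difference between the two services can be sketched as a small simulation. The function names below are purely illustrative (they are not part of any real network API): the connection-oriented “tube” is modelled as a FIFO queue, while the connectionless service routes each message independently, so arrival order may differ from sending order.

```python
from collections import deque
import random

def connection_oriented_deliver(messages):
    """Model a connection: a tube in which bits come out in the order sent."""
    pipe = deque(messages)                    # first in, first out
    return [pipe.popleft() for _ in range(len(pipe))]

def connectionless_deliver(messages, rng):
    """Model a connectionless service: each message routed independently."""
    in_transit = list(messages)
    rng.shuffle(in_transit)                   # independent routes => arbitrary arrival order
    return in_transit

msgs = ["m1", "m2", "m3", "m4"]
print(connection_oriented_deliver(msgs))               # always in sending order
print(connectionless_deliver(msgs, random.Random(7)))  # possibly reordered
```

With the connection-oriented model the sending order is always preserved; with the connectionless model every message still arrives, but nothing constrains the order.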

Network Architecture

There are two important network architectures present in the world today, the OSI Reference Model and the TCP/IP Reference Model. The OSI model laid the foundation for the TCP/IP model and hence its study is absolutely essential for a good grasp of the basics of networking & communication.

A Seven-Layer Blueprint (The OSI Reference Model)

To make it easier to deal with the numerous levels and issues involved in communication, the International Organization for Standardization (ISO) developed a reference model that clearly identifies the various levels involved, gives them standard names and points out which level should do what job. This model is called the Open Systems Interconnection Reference Model, usually abbreviated as ISO OSI or sometimes just the OSI model.

In the OSI model, communication is divided up into seven levels or layers, as shown in Fig. 2-3 below. Each layer deals with one specific aspect of the communication. In this way, the problem can be divided up into manageable pieces, each of which can be solved independently of the other ones. Each layer provides an interface to the one above it. Before data can be transmitted, each OSI layer in the sending host computer establishes with the corresponding layer in the receiving host the applicable ground rules for a communication system: type of transmission, computer alphabet to be used, error-checking method and the like. This method is called peer-to-peer communication.

Data to be communicated originates at the sending host computer and passes through each OSI layer on its way to the network. As the information descends through each layer, it undergoes a transformation that prepares it for processing by the next layer. Upon reaching the bottom layer, data is passed to the network as a serial stream of bits represented by changing voltages, microwaves or light pulses. At the receiving host, the stream of bits travels in reverse order through the seven OSI layers. The received data is then displayed on a terminal in its original form[4]. A brief description of the seven layers follows.

Fig. 2-3 ISO-OSI Reference Model

  1. Physical Layer : This layer is concerned with transmitting the raw bits over a communication channel. The design issues here largely deal with mechanical, electrical and procedural interfaces, and the physical transmission medium, which lies below the physical layer.
  2. Data Link Layer : The main task of the data link layer is to take a raw transmission facility and transform it into a line that appears free of errors to the network layer. The data link layer design issues include ‘framing’: breaking input data into data frames, ‘flow regulation’: to keep a fast transmitter from drowning a slow receiver, and ‘error control’. A special sub-layer of the data link layer, the Medium Access Sub-layer is used to deal with allocation issue in broadcast networks.
  3. Network Layer : This layer is concerned with controlling the operation of the subnet. A key design issue is determining how packets are routed from the source to destination. If too many packets are present in the subnet at the same time, they will get into each other’s way, forming bottlenecks. The control of such ‘congestion’ is also a concern of the network layer.
  4. Transport Layer : This layer is a true end-to-end layer, from source to destination. Its basic function is to accept data from the session layer, pass it to the network layer and ensure that it arrives correctly at the other end. Flow control plays a key role in this layer too.
  5. Session Layer : A session layer allows users on different machines to establish connections between them. Not only does this layer provide ordinary data transfer, but it also provides some enhanced services viz. ‘dialogue control’, ‘token management’ and ‘synchronization’.
  6. Presentation Layer : This layer, unlike all other layers, is concerned with the syntax and semantics of the information transmitted. A typical example of a presentation service is encoding data in a standard agreed upon way.
  7. Application Layer : This layer is really just a collection of miscellaneous protocols for common activities such as electronic mail, file transfer, and connecting remote terminals to computers over a network.
The TCP/IP Reference Model

The TCP/IP reference model grew out of the ARPANET, the research network that preceded it, whose protocols ran into major problems as the network grew. The ARPANET eventually connected hundreds of universities and government installations using leased telephone lines. When satellite and radio networks were added later, the existing protocols had trouble interworking with them, so a new reference architecture was needed. Thus the ability to connect multiple networks in a seamless way was one of the major design goals from the very beginning. This architecture later became known as the TCP/IP Reference Model.

The TCP/IP model is very similar to the OSI model except that it doesn’t contain the Presentation and Session layers, and the Data Link and Physical layers of the OSI model are combined into a single Host-to-Network layer.

The Data Link Layer

This is the second layer in the protocol stack as shown in Fig. 2-3. The customary purpose of the data link layer is to convert the unreliable bit-pipe at layer 1 into a higher-level, virtual communication link for sending packets asynchronously but error-free in both directions over the link. The virtual and actual transmission paths are as shown in Fig. 2-4.

Fig. 2-4 Virtual and Actual Communication Paths

The data link layer accomplishes its objectives by having the sender break the input data into ‘data frames’. The size of the data frames, which is decided by the data link layer, typically ranges from a few hundred to a few thousand bytes. It is also up to the data link layer to create and recognise frame boundaries. This is accomplished by attaching special bit patterns to the beginning and end of the frame.
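One common realisation of such boundary marking is flag-byte framing with byte stuffing, in the style of HDLC/PPP: a special flag byte delimits each frame, and any accidental occurrence of the flag (or escape) inside the payload is escaped so the receiver cannot mistake it for a boundary. The sketch below assumes HDLC-style flag and escape values; it illustrates the idea rather than any particular standard’s full encoding.

```python
FLAG = 0x7E   # frame delimiter (value borrowed from HDLC-style framing)
ESC  = 0x7D   # escape byte

def frame(payload: bytes) -> bytes:
    """Wrap a payload in FLAG bytes, escaping any FLAG/ESC bytes inside it."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed += bytes([ESC, b ^ 0x20])   # escape, then flip bit 5
        else:
            stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def deframe(data: bytes) -> bytes:
    """Recover the original payload from a framed byte string."""
    assert data[0] == FLAG and data[-1] == FLAG
    payload, i = bytearray(), 1
    while i < len(data) - 1:
        if data[i] == ESC:
            payload.append(data[i + 1] ^ 0x20)  # undo the bit-5 flip
            i += 2
        else:
            payload.append(data[i])
            i += 1
    return bytes(payload)

msg = bytes([0x01, 0x7E, 0x7D, 0x02])   # payload containing FLAG and ESC
assert deframe(frame(msg)) == msg        # round trip recovers the payload
```

Because stuffing makes the flag byte unambiguous, the receiver can always locate frame boundaries, whatever bytes the network layer hands down.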

The data link layer has a number of specific functions to carry out. The functions include providing a well defined service interface to the network layer, framing, dealing with transmission errors, regulating the flow of frames, and general link management.

The data link layer can be designed to offer various services to the network layer. The commonly provided services include the following.

  1. Unacknowledged Connectionless Service:

    • The source machine sends independent frames to the destination machine without having the destination machine acknowledge them.

    • No connection is established beforehand and no connection is released afterward.

    • If a frame is lost due to noise on the channel, its recovery is left to the higher layers in the protocol stack.

    • Due to its unreliability, the service is appropriate when the error rate is very low.

    • It is also appropriate for real-time traffic, such as speech, in which bad data is preferable to late data.

  2. Acknowledged Connectionless Service:

    • No connections are used.

    • Each frame is acknowledged individually, so the sender knows whether a frame has arrived at the destination or not; if it has not arrived within a specified time interval, it can be sent again.

    • This service is useful over unreliable channels, such as wireless systems.

  3. Acknowledged Connection-Oriented Service:

    • The source and destination machines establish a connection before any data is transferred, and release it afterward.

    • Each frame sent over the connection is numbered, and the data link layer guarantees that each frame is received exactly once and that all frames are received in the right order.

Data Link Protocols

When the data link layer accepts a packet from the network layer, it encapsulates the packet in a frame by adding a data link header and trailer to it. Thus a frame consists of an embedded packet and some control (header) information. A frame actually consists of four fields: kind, seq, ack and info, the first three of which contain control information and the last of which may contain actual data to be transferred. The control fields are collectively called the frame header. The kind field specifies whether the frame is a data frame (containing data to be transferred) or is simply a control frame (containing control information only for establishment or release of connection, etc). The seq and ack fields are used for sequence numbers and acknowledgements, respectively. The info field of a data frame contains a single packet, whereas for a control frame, this field is not used.
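The four-field frame layout described above can be rendered, purely for illustration, as a small data structure. The field names follow the text; the Python types are an assumption made for the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DATA = 0       # frame carrying data to be transferred
    CONTROL = 1    # frame carrying control information only

@dataclass
class Frame:
    kind: Kind          # data frame or control frame
    seq: int            # sequence number of this frame
    ack: int            # acknowledgement (sequence number being acknowledged)
    info: bytes = b""   # the embedded network-layer packet (unused in control frames)

# A data frame: kind, seq and ack form the header; info holds one packet.
f = Frame(kind=Kind.DATA, seq=0, ack=1, info=b"packet payload")
```

Separating the header fields from the info field in this way is what lets the protocols below manipulate sequence numbers and acknowledgements without touching the embedded packet.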

With this background, we can examine the various protocols that are used in the data link layer. The discussion proceeds with protocols with increasing complexity.

An Unrestricted Simplex Protocol

As the name suggests, this protocol supports only ‘simplex’ transmission, i.e. data transmission in one direction only. It is the simplest of all protocols implemented at the data link layer. The transmitting and receiving ends are always ready, processing time can be ignored, infinite buffer space is available, and communication is ideal, with no damage or loss of data. Thus the protocol rests on the assumption that the receiving network layer can process incoming data infinitely fast or, equivalently, possesses infinite buffer space.

The above assumptions make the simplex protocol ‘unrealistic’ to be implemented in practical systems.

A Simplex Stop-and-Wait Protocol

This protocol is an improvement on the simplex protocol discussed earlier, dropping its most unrealistic assumption: that the receiving network layer can process incoming data infinitely fast or possesses infinite buffer space. The communication channel is still assumed to be error free and the data traffic is still simplex.

Here the receiver has finite buffer capacity and finite processing speed, and hence the protocol must explicitly prevent the sender from flooding the receiver with data faster than the receiver can handle it. This is done by incorporating feedback. After having passed a packet to its network layer, the receiver sends a little dummy frame back to the sender, which, in effect, gives the sender permission to transmit the next frame. After having sent a frame, the sender is required to wait until the acknowledgement frame arrives; hence the name ‘stop-and-wait’. Further, the sending data link layer need not inspect the incoming frame (the acknowledgement), as there is only one possibility.

Since frames travel in both directions, the communication channel needs to be capable of handling bi-directional information transfer. Moreover, a strict alternation of flow is implemented – first the sender sends a frame, then the receiver sends a frame, and the process is repeated.
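The alternating exchange described above can be sketched as a small simulation. The function and channel names below are illustrative, not part of any real protocol implementation, and the channel is assumed error free, as in the text.

```python
from collections import deque

def stop_and_wait(packets):
    """Toy stop-and-wait: one data frame out, one dummy ACK frame back."""
    to_receiver = deque()   # forward channel, carrying data frames
    to_sender = deque()     # reverse channel, carrying dummy (ACK) frames
    delivered, log = [], []
    for pkt in packets:
        to_receiver.append(pkt)          # sender transmits one frame
        log.append(f"send {pkt}")
        frame = to_receiver.popleft()    # receiver takes it off the wire,
        delivered.append(frame)          # passes it to its network layer,
        to_sender.append("ACK")          # and returns a little dummy frame
        log.append(to_sender.popleft())  # only now may the sender proceed
    return delivered, log

delivered, log = stop_and_wait(["p0", "p1", "p2"])
print(log)   # ['send p0', 'ACK', 'send p1', 'ACK', 'send p2', 'ACK']
```

The strict send/ACK alternation in the log is exactly the flow-regulation property the protocol exists to provide: the receiver’s buffer never holds more than one unprocessed frame.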

A Simplex Protocol for a Noisy Channel

Here a normal communication channel that makes errors is considered. Frames may be either damaged or lost completely. If no errors are detected by the receiver, it responds by sending a positive acknowledgement back to the sender. In this respect, it is similar to the Stop-and-Wait Simplex protocol.

However, if errors are detected at the destination, the receiver simply discards the frame and will not respond. To account for this possibility, the source is equipped with a timer. After a frame is transmitted, the source waits for an acknowledgement (ACK). If no recognisable acknowledgement is received during the timeout period, then the frame is retransmitted. This system therefore requires that the source maintain a copy of a transmitted frame until an ACK is received for that frame.

There is a possibility, though, that a frame is sent correctly but the acknowledgement is damaged in transit; the source will then time out and retransmit the frame. The destination will now receive and accept two copies of the same frame. To avoid this problem, the sender puts a sequence number in the header of each frame it sends. The receiver can then check the sequence number of each arriving frame to see if it is a new frame or a duplicate to be discarded. Since the only ambiguity is between a frame and its immediate predecessor or successor, not between the predecessor and successor themselves, a 1-bit sequence number (0 or 1) is sufficient. When a frame with the correct sequence number arrives, it is accepted, passed to the network layer, and the expected sequence number is incremented modulo 2 (i.e. 0 becomes 1 and 1 becomes 0).

Such a protocol in which the sender waits for a positive acknowledgement before advancing to the next data item is called PAR (Positive Acknowledgement with Retransmission) or ARQ (Automatic Repeat Request). Although the protocol can handle lost frames, it requires the timeout interval to be long enough to prevent premature timeouts which otherwise could lead to failure of the protocol.
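The retransmission and duplicate-rejection mechanism can be sketched as a toy simulation. All names below are illustrative; the channel loses data frames and acknowledgements at random, and loss_prob is assumed to be less than 1 so every frame eventually gets through.

```python
import random

def par_transfer(packets, loss_prob, seed):
    """Toy PAR/ARQ transfer with a 1-bit sequence number over a lossy channel."""
    rng = random.Random(seed)
    delivered = []
    expected = 0            # receiver: sequence number it is waiting for
    next_seq = 0            # sender: sequence number of the current frame
    for pkt in packets:
        acked = False
        while not acked:                 # keep retransmitting on timeout
            if rng.random() < loss_prob:
                continue                 # data frame lost -> timeout -> resend
            if next_seq == expected:     # new frame: accept it
                delivered.append(pkt)
                expected ^= 1            # increment modulo 2
            # duplicates (next_seq != expected) are silently discarded,
            # but the receiver still returns an acknowledgement
            if rng.random() < loss_prob:
                continue                 # ACK lost -> timeout -> resend
            acked = True
        next_seq ^= 1                    # advance to the next data item
    return delivered

# Every packet is delivered exactly once, despite random losses.
assert par_transfer(["p0", "p1", "p2"], 0.3, seed=1) == ["p0", "p1", "p2"]
```

Note how the 1-bit sequence number is what makes a retransmission caused by a lost acknowledgement harmless: the duplicate arrives with the “old” sequence number and is discarded rather than passed to the network layer twice.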

In all the protocols discussed above, data frames were transmitted in one direction only. In most practical situations, there is a need for transmitting data in both directions. We have also seen that, in addition to being simplex, the PAR protocol can fail under some peculiar conditions involving early timeouts. It would be nicer to have a protocol that remained synchronised in the face of any combination of garbled frames, lost frames and premature timeouts. Discussed next are some protocols which are more robust and more practical. These belong to the class of protocols called the Sliding Window Protocols.

The Medium Access Sublayer

Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels.

In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. The layers of the ISO OSI model are not quite appropriate for multi-access media. There is a need for an additional sublayer, often called the ‘Medium Access Control (MAC)’ sublayer, between the data link layer and the modem or physical layer. The purpose of this extra sublayer is to allocate the multi-access channel among the various competing stations. The MAC sublayer is especially important in LANs, nearly all of which use a multi-access channel as the basis of their communication.

The Channel Allocation Problem

There are, in general, two schemes for allocating a single channel among competing users viz. ‘static channel allocation’ and ‘dynamic channel allocation’.

Static Channel Allocation in LANs and MANs

The traditional way of allocating a single channel among multiple competing users is Frequency Division Multiplexing (FDM). If there are N users, the bandwidth is divided into N equal size portions, each user being assigned one portion. Thus each user has a private frequency band and there is no interference between users. FDM is a simple and efficient allocation mechanism.

However, the potential difficulties with this approach are increased delay and underutilization of the medium. If the spectrum is cut up into N regions, and fewer than N users are interested in communicating, a large piece of valuable spectrum will be wasted. If there are more than N users, some of them will be denied permission, for lack of bandwidth. Thus if some users are quiescent, their bandwidth is simply lost. They are not using it and no one else is allowed to use it either.
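The waste described above is simple arithmetic. The figures below are made up for the example: a channel of total bandwidth 300 kHz split statically among N = 10 users, of whom only 4 are active.

```python
def fdm_utilisation(total_bw_hz, n_users, n_active):
    """Static FDM: each user owns total/N; quiescent users' bands lie idle."""
    per_user = total_bw_hz / n_users             # each user's private band
    used = per_user * min(n_active, n_users)     # only active bands carry traffic
    wasted = total_bw_hz - used                  # idle spectrum, unusable by others
    return per_user, used, wasted

per_user, used, wasted = fdm_utilisation(300_000, 10, 4)
print(per_user, used, wasted)   # 30000.0 120000.0 180000.0
```

With only 4 of 10 users active, 180 kHz of the 300 kHz channel (60%) sits idle, which is precisely the underutilisation problem that motivates dynamic allocation.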

Dynamic Channel Allocation in LANs and MANs

The delay can be reduced and channel utilization increased by sharing the medium on a demand basis. There are five key assumptions underlying all the channel allocation methods in this category:

  1. Station Model. The model consists of N independent stations (computers, telephones, etc.), each with a program or user that generates frames for transmission.
  2. Single Channel Assumption. All stations can transmit and receive on a single communication channel. The protocol software can assign priorities to the stations.
  3. Collision Assumption. If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision, which all stations can detect.
  4. Continuous or Slotted Time. One of two alternative assumptions about time is made:

    • Continuous Time. Frame transmission can begin at any instant.

    • Slotted Time. Time is divided into discrete intervals (slots). Frame transmission always begins at the start of a slot.

  5. Carrier Sense or No Carrier Sense. One of two alternative assumptions about the stations is made:

    • Carrier Sense. Stations can sense whether the channel is in use. If the channel is sensed as busy, no station will attempt to use it until it goes idle.

    • No Carrier Sense. Stations cannot sense the channel before trying to use it. The success or failure of transmission is detected later.

Multiple Access Protocols

Some of the algorithms for allocating a multiple access channel are discussed below.

ALOHA

This is a random access or contention technique: random access because there is no predictable or scheduled time for any station to transmit, so station transmissions are ordered randomly; and contention because the stations contend for time on the medium.

ALOHA or Pure ALOHA, as it is sometimes called, is a true free-for-all. Whenever a station has a frame to send, it does so. The station then listens for an amount of time equal to the maximum possible round-trip propagation delay in the network (twice the time it takes to send a frame between the most widely separated stations) plus a small fixed time increment. If the station doesn’t hear an acknowledgement during that time, it resends the frame. If the station fails to receive an acknowledgement after repeated transmissions it gives up. A receiving station determines the correctness of the incoming frame. The frame may be invalid due to noise on the channel or because another station transmitted a frame at about the same time, causing a collision. The receiving station ignores such an invalid frame. If the frame is valid and the destination address in the frame header matches the receiver’s address, the station immediately sends an acknowledgement. Although ALOHA is simple, the number of collisions rises rapidly with increased load, lowering the maximum utilisation to about 18% only.

To improve efficiency, a modification of ALOHA called Slotted ALOHA was developed. In this scheme, time on the channel is organised into uniform slots whose size equals the frame transmission time. Transmission is permitted to begin only at a slot boundary. Thus, the frames that do overlap will do so totally. Also the vulnerable period is now halved and the maximum utilisation of the channel is increased to about 37%.
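The utilisation figures quoted above follow from the classical ALOHA throughput formulas. With G the offered load in frames per frame time, pure ALOHA achieves throughput S = G·e^(−2G) (maximised at G = 0.5) and slotted ALOHA achieves S = G·e^(−G) (maximised at G = 1). The short computation below checks both maxima.

```python
import math

def pure_aloha_throughput(g):
    """Pure ALOHA: S = G * e^(-2G); vulnerable period is two frame times."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """Slotted ALOHA: S = G * e^(-G); vulnerable period halved by slotting."""
    return g * math.exp(-g)

# Maxima occur at G = 0.5 and G = 1 respectively.
print(round(pure_aloha_throughput(0.5), 3))     # 0.184 -> the "about 18%" figure
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 -> the "about 37%" figure
```

The two maxima, 1/2e ≈ 18.4% and 1/e ≈ 36.8%, are exactly the utilisation limits cited in the text, confirming that slotting doubles the achievable channel utilisation.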

Both ALOHA and slotted ALOHA exhibit poor utilisation and many collisions. Both fail to take advantage of one of the key properties of LANs, which is that the propagation delay between stations is usually very small compared to frame transmission time. In that case, when a station launches a frame, all the other stations know it almost immediately. They would not try transmitting until the first station was done. Collisions would be rare since they would occur only when two stations began to transmit almost simultaneously.

Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are called carrier sense protocols. Some common versions of carrier sense protocols will be discussed in the chapters to follow.


In the next chapter we investigate the Sliding Window Protocols and their design in detail.


