In this chapter, we discuss the concept of computer networks and layered protocols.
The merging of computers and communications has had a profound influence on the way computer systems are organised. With the development of more and more powerful microprocessor chips, there has been a shift in cost effectiveness from large time-shared computer facilities to small but increasingly powerful personal computers and workstations. Thus, the primary growth in computation is in the number of computer systems rather than in the increasing power of a small number of very large computer systems. The old model of a single computer serving all of the organisation’s computational needs has been replaced by one in which a large number of separate but interconnected computers do the job. These systems are called "COMPUTER NETWORKS".
There is no generally accepted taxonomy into which all computer networks fit, but two dimensions stand out as important: transmission technology and scale.
Point-to-point networks consist of many connections between individual pairs of machines. To go from source to destination, a packet on this type of network may first have to visit one or more intermediate machines. Often multiple routes are possible; hence routing algorithms play an important role here.
In contrast, broadcast networks have a single communication channel that is shared by all the machines on the network. Here any machine can send packets, and a small address field in each packet specifies for whom it is intended. These systems also allow the possibility of addressing a packet to all destinations; this mode of operation is called 'broadcasting'. They also support transmission to a subset of the machines, called 'multicasting'. Thus broadcast channels are also referred to as 'multicast channels' or 'random access channels'.

Network Software & Protocol Hierarchies
To reduce their design complexity, networks are organised as a series of layers or levels, each one built upon the one below it. Thus layering, or layered architecture, is a form of hierarchical modularity that is central to data network design. Each module performs a given function in support of the overall function of the system. Such a function is often called the service provided by the module. Thus the purpose of each layer is to offer certain services to the higher layers, shielding them from the details of how the offered services are actually implemented.
The rules or conventions used in a conversation between the layer n of one machine with the layer n of another machine, are collectively known as the layer n protocol.
Basically, a protocol is an agreement between the communicating parties on how the communication is to proceed. In implementation terms, it is a software module realising a particular communication protocol specification (e.g. TCP or IP). Protocols are traditionally composed into graphs, which provide services to applications. The arrangements in which protocols (depicted as nodes) may be composed to provide services are described (and in some cases constrained) by a protocol graph, while a protocol stack is the actual sequence of protocols through which the messages of a particular session pass. The two terms, though, are often used interchangeably. As an example, a five-layer network is shown in Fig. 2-2 below.
The entities comprising the corresponding layers on different machines are called peers. In reality, no data is transferred from layer n on one machine to layer n on another machine. Instead, each layer passes data and control information to the layer immediately below it, until the lowest layer is reached. Below layer 1 is the physical medium through which the actual communication occurs.
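This downward flow of data and control information can be sketched as encapsulation: each layer wraps what it receives from above with its own header, and the peer layer on the receiving machine strips that header off again. The following Python sketch is purely illustrative; the layer names and bracketed "headers" are invented for this example and do not correspond to any real protocol format.

```python
# Illustrative sketch of layered encapsulation. Each layer prepends its
# own header on the way down; the receiving stack removes them in the
# opposite order on the way up. Layer names here are generic examples.

LAYERS = ["transport", "network", "data-link"]  # top to bottom

def send(message: str) -> str:
    """Pass a message down the stack, adding one header per layer."""
    frame = message
    for layer in LAYERS:
        frame = f"[{layer}]" + frame   # prepend this layer's header
    return frame                       # what actually goes on the wire

def receive(frame: str) -> str:
    """Pass a received frame up the stack, removing one header per layer."""
    for layer in reversed(LAYERS):     # outermost (lowest) header first
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

wire = send("hello")
print(wire)            # [data-link][network][transport]hello
print(receive(wire))   # hello
```

Note that `receive` undoes the headers in exactly the reverse order that `send` added them, mirroring how peer layers on the two machines cooperate without any direct data transfer between them.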
Between each pair of adjacent layers there is an interface, which defines the primitive operations and services the lower layer offers to the upper one. A set of layers and protocols is called a network architecture.

Layer Services
Layers can offer two different types of service to the layers above them: connection-oriented and connectionless.
Connection-oriented service is modeled after the telephone system: the service user first establishes a connection, uses the connection, and then releases it. In contrast, connectionless service is modeled after the postal system. Each message carries the full destination address, and each one is routed through the system independently of all the others. Here, if two messages are sent to the same destination, it is possible that the first one sent is delayed so that the second message arrives first. With a connection-oriented service this is impossible.

Network Architecture
There are two important network architectures present in the world today, the OSI Reference Model and the TCP/IP Reference Model. The OSI model laid the foundation for the TCP/IP model and hence its study is absolutely essential for a good grasp of the basics of networking & communication.
To make it easier to deal with the numerous levels and issues involved in communication, the International Organization for Standardization (ISO) developed a reference model that clearly identifies the various levels involved, gives them standard names and points out which level should do what job. This model is called the Open Systems Interconnection Reference Model, usually abbreviated as ISO OSI or sometimes just the OSI model.
In the OSI model, communication is divided up into seven levels or layers, as shown in Fig. 2-3 below. Each layer deals with one specific aspect of the communication. In this way, the problem can be divided up into manageable pieces, each of which can be solved independently of the other ones. Each layer provides an interface to the one above it. Before data can be transmitted, each OSI layer in the sending host computer establishes with the corresponding layer in the receiving host the applicable ground rules for a communication system: type of transmission, computer alphabet to be used, error-checking method and the like. This method is called peer-to-peer communication.
Data to be communicated originates at the sending host computer and passes through each OSI layer on its way to the network. As the information descends through each layer, it undergoes a transformation that prepares it for processing by the next layer. Upon reaching the bottom layer, data is passed to the network as a serial stream of bits represented by changing voltages, microwaves or light pulses. At the receiving host, the stream of bits travels in reverse order through the seven OSI layers. The received data is then displayed on a terminal in its original form. A brief description of the seven layers follows.
The TCP/IP model has its origins in the ARPANET, a research network that eventually connected hundreds of universities and government installations using leased telephone lines. When satellite and radio networks were added later, the existing protocols had trouble interworking with them, so a new reference architecture was needed. Thus the ability to connect multiple networks in a seamless way was one of the major design goals from the very beginning. This architecture later became known as the TCP/IP Reference Model.
The TCP/IP model is very similar to the OSI model, except that it doesn't contain the Presentation and Session layers, and the Data Link and Physical layers of the OSI model are merged into a single Host-to-Network layer.
This is the second layer in the protocol stack as shown in Fig. 2-3. The customary purpose of the data link layer is to convert the unreliable bit-pipe at layer 1 into a higher-level, virtual communication link for sending packets asynchronously but error-free in both directions over the link. The virtual and actual transmission paths are as shown in Fig. 2-4.
The data link layer accomplishes its objectives by having the sender break the input data into ‘data frames’. The size of the data frames typically ranges from a few hundred to a few thousand bytes, which is decided by the data link layer. It is also up to the data link layer to create and recognise frame boundaries. This is accomplished by attaching special bit patterns to the beginning and end of the frame.
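Marking frame boundaries with special patterns requires some care: the chosen delimiter might also occur inside the data, so occurrences inside the payload must be escaped. The following Python sketch shows a byte-oriented variant of this idea (the flag value 0x7E and escape value 0x7D are borrowed from HDLC-style framing for illustration; many links instead use bit stuffing at the bit level):

```python
# Byte-stuffing sketch: delimit a payload with flag bytes, escaping any
# flag or escape bytes that happen to occur inside the payload itself.
FLAG = b"\x7e"   # frame delimiter
ESC = b"\x7d"    # escape byte

def frame(payload: bytes) -> bytes:
    """Wrap a payload in flag bytes, escaping ESC first, then FLAG."""
    stuffed = payload.replace(ESC, ESC + ESC).replace(FLAG, ESC + FLAG)
    return FLAG + stuffed + FLAG

def unframe(data: bytes) -> bytes:
    """Strip the flags and undo the escaping to recover the payload."""
    assert data[:1] == FLAG and data[-1:] == FLAG, "malformed frame"
    body = data[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i:i+1] == ESC:
            i += 1                 # skip the escape, keep the next byte
        out.append(body[i])
        i += 1
    return bytes(out)

print(frame(b"hi"))                          # b'~hi~'
print(unframe(frame(b"a\x7e\x7db")))         # b'a~}b'
```

Escaping the escape byte before the flag byte matters: doing it the other way round would double-escape the escape bytes that the first pass inserts.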
The data link layer has a number of specific functions to carry out. The functions include providing a well defined service interface to the network layer, framing, dealing with transmission errors, regulating the flow of frames, and general link management.
The data link layer can be designed to offer various services to the network layer. The commonly provided services include the following.
When the data link layer accepts a packet from the network layer, it encapsulates the packet in a frame by adding a data link header and trailer to it. Thus a frame consists of an embedded packet and some control (header) information. A frame actually consists of four fields: kind, seq, ack and info, the first three of which contain control information and the last of which may contain actual data to be transferred. The control fields are collectively called the frame header. The kind field specifies whether the frame is a data frame (containing data to be transferred) or is simply a control frame (containing control information only for establishment or release of connection, etc). The seq and ack fields are used for sequence numbers and acknowledgements, respectively. The info field of a data frame contains a single packet, whereas for a control frame, this field is not used.
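The four-field frame layout described above can be sketched as a simple data structure. This Python sketch is illustrative only; the field types and enum values are chosen for the example and are not taken from any particular standard.

```python
from dataclasses import dataclass
from enum import Enum

class FrameKind(Enum):
    DATA = 0      # carries a network-layer packet in the info field
    CONTROL = 1   # connection establishment/release etc.; info unused

@dataclass
class Frame:
    kind: FrameKind   # data frame or control frame
    seq: int          # sequence number of this frame
    ack: int          # sequence number of the frame being acknowledged
    info: bytes       # the embedded packet (data frames only)

# A data frame carrying the packet b"hello" with sequence number 0:
f = Frame(FrameKind.DATA, seq=0, ack=0, info=b"hello")
print(f.kind, f.seq)   # the kind, seq and ack fields form the frame header
```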
With this background, we can examine the various protocols that are used in the data link layer. The discussion proceeds with protocols with increasing complexity.
As the name suggests, this protocol supports only 'simplex' transmission, i.e. data transmission in one direction only. It is the simplest of all protocols implemented at the data link layer. It assumes that the transmitting and receiving ends are always ready, that processing time can be ignored, that infinite buffer space is available, and that the communication channel is ideal, with no damage or loss of data. Thus the protocol imposes the restriction that the receiving network layer must be able to process incoming data infinitely fast or, equivalently, possess infinite buffer space.
The above assumptions make the simplex protocol ‘unrealistic’ to be implemented in practical systems.
This protocol is an improvement on the simplex protocol discussed earlier, dropping its most unrealistic assumption: that the receiving network layer can process incoming data infinitely fast or possesses infinite buffer space. The communication channel is still assumed to be error-free and the data traffic is still simplex.
Here the receiver has finite buffer capacity and finite processing speed and hence the protocol must explicitly prevent the sender from flooding the receiver with data faster than can be handled. Incorporating a ‘feedback’ does this. After having passed a packet to its network layer, the receiver sends a little dummy frame back to the sender, which, in effect, gives the sender permission to transmit the next frame. After having sent a frame, the sender is required to wait until the acknowledgement frame arrives. Hence the name ‘stop-and-wait’. Further, the sending data link layer need not inspect the incoming frame (the acknowledgement), as there is only one possibility.
Since frames travel in both directions, the communication channel needs to be capable of handling bi-directional information transfer. Moreover, a strict alternation of flow is implemented – first the sender sends a frame, then the receiver sends a frame, and the process is repeated.
Here we consider a normal communication channel that makes errors: frames may be either damaged or lost completely. If no errors are detected by the receiver, it responds by sending a positive acknowledgement back to the sender. In this respect, it is similar to the Stop-and-Wait Simplex protocol.
However, if errors are detected at the destination, the receiver simply discards the frame and will not respond. To account for this possibility, the source is equipped with a timer. After a frame is transmitted, the source waits for an acknowledgement (ACK). If no recognisable acknowledgement is received during the timeout period, then the frame is retransmitted. This system therefore requires that the source maintain a copy of a transmitted frame until an ACK is received for that frame.
There is a possibility, though, that a frame is sent correctly but the acknowledgement is damaged in transit; the source will then time out and retransmit that frame. The destination will now receive and accept two copies of the same frame. To avoid this problem, the sender puts a sequence number in the header of each frame it sends. The receiver can then check the sequence number of each arriving frame to see if it is a new frame or a duplicate to be discarded. Since the only ambiguity is between a frame and its immediate predecessor or successor, not between the predecessor and successor themselves, a 1-bit sequence number (0 or 1) is sufficient. When a frame with the correct sequence number arrives, it is accepted, passed to the network layer and the expected sequence number is incremented modulo 2 (i.e. 0 becomes 1 and 1 becomes 0).
Such a protocol in which the sender waits for a positive acknowledgement before advancing to the next data item is called PAR (Positive Acknowledgement with Retransmission) or ARQ (Automatic Repeat Request). Although the protocol can handle lost frames, it requires the timeout interval to be long enough to prevent premature timeouts which otherwise could lead to failure of the protocol.
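The PAR mechanism described above can be sketched as a small simulation. Everything here is illustrative: the "channel" is a function that randomly drops items, and a lost acknowledgement simply shows up as a timeout that triggers retransmission. The 1-bit sequence number lets the receiver discard duplicates caused by lost ACKs.

```python
import random

# Sketch of stop-and-wait ARQ (PAR) over a lossy channel, with a 1-bit
# sequence number to detect duplicates. The channel and its loss rate
# are simulated; a timeout is modelled as "no reply came back".

random.seed(42)
LOSS_RATE = 0.3   # probability a frame or an ACK is lost (illustrative)

def lossy_send(item):
    """Return the item, or None if the channel 'lost' it."""
    return None if random.random() < LOSS_RATE else item

def transfer(packets):
    """Deliver a list of packets reliably over the lossy channel."""
    delivered = []
    expected = 0                  # receiver: next sequence number wanted
    seq = 0                       # sender: current sequence number
    for pkt in packets:
        while True:
            frame = lossy_send((seq, pkt))     # transmit (seq, data)
            if frame is not None:
                if frame[0] == expected:       # new frame: accept it
                    delivered.append(frame[1])
                    expected ^= 1              # increment modulo 2
                # duplicates are discarded but still acknowledged
                ack = lossy_send(frame[0])     # ACK may itself be lost
            else:
                ack = None                     # frame lost: no ACK
            if ack == seq:        # correct ACK arrived before timeout
                seq ^= 1
                break             # advance to the next packet
            # otherwise: timeout expired, retransmit the same frame
    return delivered

print(transfer(["a", "b", "c"]))   # ['a', 'b', 'c']
```

The sender keeps retransmitting the same (seq, packet) pair until the matching ACK gets through, which is exactly why it must retain a copy of each frame until it is acknowledged.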
In all the protocols discussed above, data frames were transmitted in one direction only. In most practical situations, there is a need for transmitting data in both directions. In addition to being simplex, we have seen that the PAR protocol can fail under some peculiar conditions involving early timeout. It would be nicer to have a protocol that remained synchronised in the face of any combination of garbled frames, lost frames and premature timeouts. Discussed next are some protocols which are more robust and more practical. These belong to the class of protocols called the Sliding Window Protocols.
Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels.
In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. The layers of the ISO OSI model are not quite appropriate for multi-access media. There is a need for an additional sublayer, often called the 'Medium Access Control (MAC)' sublayer, between the data link layer and the modem or physical layer. The purpose of this extra sublayer is to allocate the multi-access channel among the various competing stations. The MAC sublayer is especially important in LANs, nearly all of which use a multi-access channel as the basis of their communication.
There are, in general, two schemes for allocating a single channel among competing users viz. ‘static channel allocation’ and ‘dynamic channel allocation’.
The traditional way of allocating a single channel among multiple competing users is Frequency Division Multiplexing (FDM). If there are N users, the bandwidth is divided into N equal-size portions, each user being assigned one portion. Thus each user has a private frequency band and there is no interference between users. FDM is a simple and efficient allocation mechanism when the number of users is small and fixed and each user has a steady stream of traffic.
However, the potential difficulties with this approach are increased delay and underutilization of the medium. If the spectrum is cut up into N regions, and fewer than N users are interested in communicating, a large piece of valuable spectrum will be wasted. If there are more than N users, some of them will be denied permission, for lack of bandwidth. Thus if some users are quiescent, their bandwidth is simply lost. They are not using it and no one else is allowed to use it either.
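The delay penalty of static splitting can be made concrete with a standard queueing argument. Modelling the channel as an M/M/1 queue (a common textbook idealisation; the parameter values below are purely illustrative), dividing the channel into N static subchannels makes the mean delay N times worse:

```python
# A standard M/M/1 queueing sketch of why static FDM increases delay.
# For a channel of capacity C bits/sec, frames of mean length 1/mu bits,
# and arrival rate lam frames/sec, the mean delay is
#     T = 1 / (mu*C - lam).
# Splitting the channel into N static subchannels gives each user
# capacity C/N and arrival rate lam/N, so
#     T_FDM = 1 / (mu*(C/N) - lam/N) = N * T,
# i.e. N times worse. Parameter values below are illustrative.

def mean_delay(mu, C, lam):
    """Mean delay of an M/M/1 queue: service rate mu*C, arrival rate lam."""
    assert mu * C > lam, "system must be stable"
    return 1.0 / (mu * C - lam)

mu, C, lam, N = 1 / 10_000, 100e6, 5000, 10   # 10,000-bit frames, 100 Mbps

T = mean_delay(mu, C, lam)               # whole channel shared on demand
T_fdm = mean_delay(mu, C / N, lam / N)   # one static FDM subchannel

print(f"shared: {T*1e6:.0f} us, FDM: {T_fdm*1e6:.0f} us")   # 200 us vs 2000 us
```

This is the quantitative version of the complaint above: bandwidth idle in one user's subchannel cannot be borrowed by another, so everyone queues behind a slower private pipe.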
The delay can be reduced and channel utilisation increased by sharing the medium on a demand basis. There are five key assumptions underlying all the channel allocation methods in this category:
1. Station model: there are N independent stations, each generating frames for transmission; once a frame is generated, the station is blocked until the frame has been successfully transmitted.
2. Single channel: a single channel is available for all communication, and all stations can transmit and receive on it.
3. Collisions: if two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled; this event is called a collision, and all stations can detect collisions.
4. Time model: time may be continuous, so that frame transmission can begin at any instant, or slotted, so that transmissions must begin at slot boundaries.
5. Carrier sense: stations may or may not be able to sense whether the channel is in use before trying to use it.
Some of the algorithms for allocating a multiple access channel are discussed below.

ALOHA
This is a random access, or contention, technique: random access because there is no predictable or scheduled time for any station to transmit (station transmissions are ordered randomly), and contention because the stations contend for time on the medium.
To improve efficiency, a modification of ALOHA called Slotted ALOHA was developed. In this scheme, time on the channel is organised into uniform slots whose size equals the frame transmission time. Transmission is permitted to begin only at a slot boundary. Thus, the frames that do overlap will do so totally. Also the vulnerable period is now halved and the maximum utilisation of the channel is increased to about 37%.
Both ALOHA and slotted ALOHA exhibit poor utilisation and many collisions. Both fail to take advantage of one of the key properties of LANs, which is that the propagation delay between stations is usually very small compared to frame transmission time. In that case, when a station launches a frame, all the other stations know it almost immediately. They would not try transmitting until the first station was done. Collisions would be rare since they would occur only when two stations began to transmit almost simultaneously.
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are called carrier sense protocols. Some common versions of carrier sense protocols will be discussed in the chapters to follow.
In the next chapter we investigate the Sliding Window Protocols and their design in detail.