What Is the Network Layer?

The network layer is the third layer of the OSI reference model, sitting between the transport layer and the data link layer. Building on the data link layer's service of transmitting data frames between two adjacent nodes, it manages the delivery of data from the source end to the destination end across several intermediate nodes, thereby providing the transport layer with its most basic end-to-end data transmission service. Its main topics include virtual-circuit and datagram packet switching, routing algorithms, congestion control methods, the X.25 protocol, the Integrated Services Digital Network (ISDN), Asynchronous Transfer Mode (ATM), and the principles and implementation of Internet interconnection.


Functional purpose

The purpose of the network layer is to achieve transparent data transmission between two end systems. Its specific functions include addressing and routing, and the establishment, maintenance, and termination of connections. The services it provides free the transport layer from having to understand the data transmission and switching technologies used inside the network. If you want to summarize the network layer in as few words as possible, it is "path selection, routing, and logical addressing."
To explain the functions of the network layer, consider the switched-network topology shown in Figure 4.1, made up of several network nodes connected in an arbitrary topology. The network layer is concerned with the operation and control of the communication subnet, and it reflects the way the resource subnet accesses the communication subnet in a networked application environment. Physically, the network layer is widely distributed; logically, it is complex. It is therefore the most complex and critical of the lower three layers (i.e., the communication subnet) of data communication in the OSI model.

The seven protocol layers

Application layer

This layer serves applications that communicate with other computers; it corresponds to the communication services of an application. For example, a word-processing program without communication functions executes no communication code, and programmers working on such a program do not care about OSI layer 7. However, if an option to transfer files is added, the word-processor programmers need to implement OSI layer 7. Examples: Telnet, HTTP, FTP, NFS, SMTP, etc.

Presentation layer

The main function of this layer is to define data formats and encryption. For example, FTP lets you choose between binary and ASCII transfer. If binary is selected, neither the sender nor the receiver changes the contents of the file. If ASCII is selected, the sender converts the text from its own character set to standard ASCII before sending the data, and the receiver converts standard ASCII into its own computer's character set. Examples: encryption, ASCII, etc.

Session layer

This layer defines how to start, control, and end a session, including the control and management of multiple bidirectional messages, so that an application can be notified when only part of a continuous message has completed and the data seen by the presentation layer remains continuous. In some cases, data is passed up to the presentation layer only after the session layer has received all of it. Examples: RPC, SQL, etc.

Transport layer

The functions of this layer include choosing between an error-recovery protocol and a protocol without error recovery, multiplexing the input data streams of different applications on the same host, and reordering packets that arrive out of order. Examples: TCP, UDP, SPX.

Network layer

This layer defines end-to-end packet transmission. It defines the logical addresses that identify all nodes, as well as how routes are implemented and learned. To accommodate transmission media whose maximum transmission unit is smaller than the packet length, the network layer also defines how to fragment a packet into smaller packets. Examples: IP, IPX, etc.
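As an illustration of this fragmentation function, here is a minimal sketch (not any particular protocol's exact algorithm) that splits a payload into MTU-sized pieces tagged with their offsets, roughly in the spirit of IP fragmentation; the function names are invented for illustration:

```python
def fragment(payload: bytes, mtu: int) -> list[tuple[int, bytes]]:
    """Split a packet payload into (offset, data) fragments of at most
    mtu bytes each, so the receiver can reassemble them in order."""
    return [(i, payload[i:i + mtu]) for i in range(0, len(payload), mtu)]

def reassemble(fragments: list[tuple[int, bytes]]) -> bytes:
    """Reassemble fragments regardless of arrival order, using offsets."""
    return b"".join(data for _, data in sorted(fragments))

frags = fragment(b"x" * 2500, 1000)   # three fragments: 1000, 1000, 500 bytes
```

Because each fragment carries its offset, the destination can restore the original packet even if fragments arrive out of order.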

Data link layer

This layer defines how data is transmitted over a single link. Its protocols depend on the particular medium in use. Examples: ATM, FDDI, etc.

Physical layer

The physical-layer specifications of OSI concern the characteristics of the transmission medium, and they usually also refer to standards developed by other organizations. Connectors, pins, pin usage, electrical current, encoding, and light modulation are all covered by various physical-layer specifications. The physical layer often uses multiple specifications to define all the details. Examples: RJ45, 802.3, etc.

Routing

The communication subnet provides multiple possible transmission paths between source and destination nodes. When a network node receives a packet, it must determine the path to the next node; this is routing. In datagram mode, a network node must select a route for each packet; in virtual-circuit mode, the route only needs to be determined when the connection is established. The strategy for determining routes is called the routing algorithm, and there are many technical factors to consider when designing one. First is the performance criterion on which the algorithm is based: one choice is the shortest route, another is the optimal route. Second is whether the communication subnet uses virtual circuits or datagrams. Third is whether to use a distributed routing algorithm, in which each node selects the next hop for each arriving packet, or a centralized algorithm, in which a central site or the originating node determines the entire route. Fourth is the source of network information such as topology, traffic, and delay. Finally, one must decide whether to use a dynamic or a static routing strategy.

Static routing

Static routing strategies neither measure nor use network information; routes are selected according to fixed rules. Three such algorithms are flooding routing, fixed routing, and random routing.
(1) Flooding routing: This is the simplest routing algorithm. After a network node receives a packet on some line, it retransmits the packet on every line except the one it arrived on. As a result, the first copy (or copies) of the packet to reach the destination node must have traveled the shortest route, and all possible paths are tried simultaneously. This method suits situations such as military networks that demand high robustness: even if some network nodes are destroyed, as long as any channel remains between source and destination, flooding routing can still deliver the data reliably. The method can also be used for broadcast data exchange, transmitting a packet from the data source to all other nodes, and for measuring the shortest transmission delay of the network.
(2) Fixed routing: This is a simple and widely used algorithm. Each network node stores a table whose records each correspond to a destination node or link. When a packet arrives at a node, the node looks up the destination and the next node to forward to in its fixed routing table, based on the packet's address information. The advantages of fixed routing are that it is simple and easy to operate, and it works well in networks with stable loads and little topological change. Its disadvantage is that it is not flexible enough to cope with congestion and failures that occur in the network.
(3) Random routing: In this method, the node that receives a packet randomly selects one of its adjacent nodes as the packet's outgoing node. Although the method is simple and reliable, the actual route is rarely optimal, it adds unnecessary load, and the packet's transmission delay is unpredictable, so this method is not widely used.
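The shortest-path property of flooding described in (1) can be sketched as a simulation. Modeling each round of forwarding as a breadth-first expansion means the first copy to arrive has crossed the fewest hops; the graph and node names below are invented for illustration:

```python
from collections import deque

def flood_first_arrival(graph: dict, source: str, dest: str):
    """Simulate flooding: each node forwards the packet on all links except
    the arrival link, and forwards each packet only once. The first copy to
    reach dest has taken a minimum-hop path, which a breadth-first
    expansion finds."""
    frontier = deque([(source, [source])])
    visited = {source}                      # each node forwards the packet once
    while frontier:
        node, path = frontier.popleft()
        if node == dest:
            return path                     # first arrival: minimum-hop path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, path + [neighbor]))
    return None                             # no channel between source and dest

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
```

Even if node C were removed from `net`, the packet would still reach E via B and D, which illustrates the robustness argument made above.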

Dynamic routing

A routing strategy in which nodes choose routes based on the current state of the network is called a dynamic routing strategy. Such a strategy adapts better to changes in network traffic and topology and helps improve network performance. However, because the algorithms are complex, they add load to the network, and they can sometimes cause oscillation by reacting too quickly, or lag by reacting too slowly. Independent routing, centralized routing, and distributed routing are three specific dynamic routing algorithms.
(1) Independent routing: In this class of algorithms, nodes make routing decisions based only on information they have gathered themselves and do not exchange routing information with other nodes. Although they cannot correctly determine routes far from the node, they can still adapt reasonably well to changes in network traffic and topology. A simple independent routing algorithm is the "hot potato" algorithm proposed by Baran in 1964: when a packet arrives, the node tries to get rid of it as quickly as possible by placing it on the shortest output queue, regardless of where that direction leads.
(2) Centralized routing: Like fixed routing, centralized routing stores a routing table at each node. The difference is that in fixed routing the node routing tables are made by hand, whereas in centralized routing they are computed by a Routing Control Center (RCC) according to network status and distributed to each node. Because the RCC uses information about the entire network, its route selections are more complete, and at the same time each node is relieved of the burden of computing routes itself.
(3) Distributed routing: In a network using a distributed routing algorithm, all nodes periodically exchange routing information with each of their neighboring nodes. Each node stores a routing table with one entry for every other node in the network, and each entry has two parts: the preferred outgoing line to use for that destination, and the estimated delay or distance to that destination. The metric can be milliseconds, number of link segments, number of queued packets, remaining line capacity, and so on. To measure delay, a node can send a special packet called an "echo"; the receiving node timestamps it and returns it as quickly as possible. With this information, each node can determine its route selection.
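The periodic neighbor exchange in (3) is essentially the distance-vector (Bellman-Ford) scheme. A minimal sketch, with an invented three-node topology and link costs, iterates exchanges until the tables stop changing:

```python
INF = float("inf")

def distance_vector(adj: dict) -> dict:
    """adj maps each node to {neighbor: link_cost}. Each node keeps a table
    of estimated distances to every destination; repeated rounds of
    neighbor exchanges relax the estimates until they converge."""
    nodes = list(adj)
    # Initially a node knows only itself (cost 0) and its direct links.
    dist = {n: {d: (0 if n == d else adj[n].get(d, INF)) for d in nodes}
            for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            for nb, cost in adj[n].items():       # learn from each neighbor
                for d in nodes:
                    if cost + dist[nb][d] < dist[n][d]:
                        dist[n][d] = cost + dist[nb][d]
                        changed = True
    return dist

adj = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
tables = distance_vector(adj)   # A learns the cheaper route to C via B
```

Here node A's direct link to C costs 4, but after one exchange it learns from B that the route A-B-C costs only 3, which is exactly the table-update behavior described above.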

Congestion control

Congestion is the phenomenon in which the number of packets arriving at some part of the communication subnet is so large that that part of the network cannot process them in time, causing the performance of that part, or of the whole network, to degrade. In severe cases network communication can even grind to a halt in a deadlock. The effect is the same as the traffic jams commonly seen in a highway network: when the number of vehicles surges on holidays, traffic flows in different directions interfere with each other, every vehicle takes longer to reach its destination (that is, delay increases), and sometimes traffic on a stretch of road comes to a complete standstill (local deadlock).
The relationship between communication-subnet throughput and load is shown in general form in Figure 4.3. When the subnet load (the number of packets in transit in the subnet) is relatively small, network throughput (in packets per second) increases linearly with load (which can be expressed as the average number of packets per node). When the load increases past a certain value, if throughput decreases instead, congestion has occurred in the network. In a congested network, packets arriving at a node find no buffer available, so those packets must be retransmitted by the previous node, or by the source node or source end system. When congestion is severe, a considerable amount of the subnet's transmission capacity and node buffering is consumed by such unnecessary retransmissions, reducing the subnet's effective throughput. This leads to a vicious circle in which part or even all of the subnet falls into a deadlocked state and effective throughput approaches zero. Ideally the utilization of the whole network would be 100%, but for the network to run stably under high load, the queue length at each node must be controlled so that traffic does not collapse through unbounded queue growth. A controlled network can operate stably in a state close to its ideal throughput.

Congestion control methods

(1) Buffer pre-allocation: This method is used in packet-switched networks that use virtual circuits. When a virtual circuit is established, each node traversed by the call request packet pre-allocates one or more data buffers for that virtual circuit. If a node's buffers are full, the call request packet is routed another way, or a "busy" signal is returned to the caller. In this way, because each node holds a permanent buffer open for each virtual circuit (until the circuit is torn down), there is always room to receive and forward its packets. When a node receives a packet and forwards it, it returns an acknowledgement to the sending node. The acknowledgement means both that the receiving node received the packet correctly and that it has freed a buffer, ready to receive the next packet. This describes the "stop-and-wait" case. If the protocol between nodes allows multiple outstanding packets, then to eliminate the possibility of congestion completely, each node must reserve a full window's worth of buffers for each virtual circuit. Whether or not there is traffic, considerable resources (line capacity or storage space) remain permanently tied up by each connection. Because dedicated resources are allocated per connection, network resources cannot be used efficiently; packet switching operated this way closely resembles circuit switching.
(2) Packet discarding: This method reserves no buffers in advance; instead, arriving packets are simply discarded when the buffers are full. If the communication subnet provides datagram service, discarding packets prevents congestion without much ill effect. However, if the subnet provides virtual-circuit service, a copy of each discarded packet must be kept somewhere so it can be retransmitted after the congestion clears. There are two ways to handle retransmission of dropped packets. One is to let the sending node time out and resend the packet until it is acknowledged. The other is to let the node that dropped the packet give up after a certain number of attempts and force the data source node to time out and restart transmission. However, it is not appropriate to discard packets indiscriminately, because a packet carrying acknowledgement information can release a node's buffer: if the node has no free buffer in which to receive such a packet, it loses a chance to free a buffer. One way to solve this problem is to permanently reserve one buffer per input link, used to receive and examine every incoming packet. For a packet carrying an acknowledgement, the piggybacked acknowledgement is first used to release a buffer; the packet is then either discarded or, if it carries useful information, stored in the buffer just vacated.
(3) Quota control: This method directly and strictly limits the number of packets in the communication subnet to prevent congestion. As the throughput-versus-load curve in Figure 4.3 shows, to avoid congestion the number of packets in transit in the subnet can be kept below some load value Lc. The subnet can therefore be designed to contain Lc special tokens called "permits." Before the subnet starts operating, some of these permits are allocated to the source nodes in advance according to some policy, while the rest circulate around the net after it starts working. When a source node wants to send a packet newly arrived from its source system, it must first hold such a permit, and each packet sent cancels one permit. On the destination-node side, each packet received and delivered to the destination system generates one permit. This ensures that the number of packets in the subnet never exceeds the number of permits.
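The permit mechanism in (3), sometimes called isarithmic congestion control, can be sketched as a fixed pool of permits shared by the subnet; the class and method names here are invented for illustration:

```python
class PermitPool:
    """Quota control sketch: at most `limit` packets may be inside the
    communication subnet at any moment."""

    def __init__(self, limit: int):
        self.permits = limit

    def admit(self) -> bool:
        """Called when a source node wants to inject a new packet:
        consumes one permit, or refuses entry if none are left."""
        if self.permits == 0:
            return False            # source must hold the packet back
        self.permits -= 1
        return True

    def deliver(self) -> None:
        """Called when a destination node hands a packet to its end
        system: regenerates one permit."""
        self.permits += 1

pool = PermitPool(limit=2)
```

With `limit=2`, a third packet is refused until one of the first two is delivered, which is exactly the invariant the text describes: packets in the subnet never exceed the number of permits.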

Deadlock

The extreme consequence of congestion is deadlock. Deadlock is one of the faults a network is prone to, even when it is not heavily loaded. When a deadlock occurs, a group of nodes can neither receive nor forward packets because no free buffers remain; the nodes wait on each other and stay in this state permanently, which can even paralyze the entire network. At that point the deadlock can only be cleared by manual intervention, restarting the network. A restart does not remove the underlying hazard, however, so the deadlock may occur again. Deadlocks arise from defects in control techniques. The cause is usually elusive and hard to find, and even when found it often cannot be fixed immediately. How to avoid deadlock must therefore be considered at every protocol layer.
Store-and-forward deadlock and its prevention: The most common deadlock is the direct store-and-forward deadlock that occurs between two nodes. Here, all of node A's buffers are occupied by the queue of packets destined for node B, and all of node B's buffers are occupied by the queue destined for node A. Node A cannot receive packets from node B, and node B cannot receive packets from node A, as shown in Figure 4.4 (a). The same situation can arise among a group of nodes, each trying to send packets to a neighboring node while none has a free buffer for receiving; this is called indirect store-and-forward deadlock, shown in Figure 4.4 (b). When a node is deadlocked, all links connected to it are completely blocked.
There is a way to prevent store-and-forward deadlock. Let the diameter of the communication subnet be M, that is, the maximum number of intermediate link segments from any source node to any destination node is M, and give each node M + 1 buffers, numbered 0 to M. A source node may accept a packet from its source system only when its buffer 0 is empty, and may forward that packet only to an adjacent node whose buffer 1 is free; that node in turn may forward the packet only to an adjacent node whose buffer 2 is free, and so on. Eventually the packet either reaches the destination node successfully and is delivered to the destination system, or it arrives in a buffer numbered M at some node and can be forwarded no further; in that case a loop must have occurred and the packet is discarded. Because every packet occupies buffers according to this ordering rule, with buffer numbers always increasing, nodes can never end up waiting on each other for free buffers, and deadlock cannot arise.
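The numbered-buffer rule above can be modeled directly. In this sketch (the node structure is invented for illustration), a packet that has made k hops may only occupy buffer k, so buffer numbers strictly increase along any path and no circular wait can form:

```python
M = 3   # assumed subnet diameter: at most M intermediate link segments

class Node:
    """A node with M + 1 buffers; buffer k may only hold a packet that
    has already made exactly k hops."""

    def __init__(self, name: str):
        self.name = name
        self.buffers = [None] * (M + 1)

    def accept(self, packet: str, hops: int) -> bool:
        if hops > M:
            return False            # longer than the diameter: a loop, discard
        if self.buffers[hops] is not None:
            return False            # that buffer class is busy; sender waits
        self.buffers[hops] = packet
        return True

node_a = Node("A")
```

A sender that is refused simply holds the packet in its own lower-numbered buffer and retries; because a packet in buffer k only ever waits on a buffer numbered k + 1, the wait-for relation can never form a cycle.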
Reassembly deadlock and its prevention: A more serious form of deadlock is reassembly deadlock. Suppose the message sent to an end system is very long and is split by the source node into several packets for transmission. The destination node reassembles all packets bearing the same message number into a message and delivers it to the destination system. Because the destination node has limited buffer space for reassembling messages, and it cannot know in advance how many packets each incoming message was split into, a serious problem can arise: in order to receive more packets, the destination node exhausts its buffer space, yet it cannot deliver anything to the destination system because no message has been completely received and assembled. Its neighboring nodes keep trying to send it packets, but it cannot accept them.
After several attempts, the neighboring nodes will try to route packets to the destination node along other paths, but the destination node is locked fast, and the surrounding area becomes congested as well. The following methods can be used to avoid reassembly deadlock: allow the destination node to deliver incomplete messages to the destination system; detect messages that cannot be completely reassembled and ask the source system that sent them to retransmit; or equip each node with backup buffer space, into which incomplete messages are temporarily moved when a reassembly deadlock occurs. The first two methods cannot resolve reassembly deadlock satisfactorily because they complicate the protocol in the end system; in a typical design, the network layer should be transparent to the end system, which should not have to deal with matters such as message splitting and reassembly. The third method does not involve the end system, but it adds overhead at every node and reduces reliability.

X.25

In 1974 CCITT proposed X.25, the standard access protocol for packet-switched networks (PSN), and subsequently revised it in 1976, 1980, 1984, and 1988. X.25 describes the interface standard between a host (DTE) and a packet-switched network (PSN), so that the host need not concern itself with the internal operation of the network and can easily access a variety of networks.

X.25 protocol layers

X.25 is in fact a set of protocols for the interface between a DTE and a PSN. The X.25 protocol set comprises three layers: the physical layer, the data link layer, and the packet layer (Figure 4.5). As Figure 4.5 (a) shows, the three protocol levels of X.25 have only local significance, unlike the transport-layer protocol, which operates end to end (Figure 4.5 (b)). The X.25 packet layer corresponds mainly to the third layer of the OSI reference model; its function is to provide a multichannel virtual-circuit service to the host.

Packet level

The main function of the X.25 packet level is to multiplex the physical link between DTE and DCE, provided by the data link layer, into a number of logical channels, and to perform on the virtual circuit established over each logical channel operations similar to those of the single-link protocol at the data link layer: link establishment, data transmission, flow control, sequencing and error detection, and link release. Data is sent in packet format, and the standard strictly specifies the length of each type of packet and the logical sequence of their interaction. Using the X.25 packet-level protocol, multiple virtual-circuit connections can be offered to users at the network layer, enabling a user to communicate simultaneously with several other X.25 data terminal (DTE) users in the public data network. In X.25, the DCE provides the local DTE with virtual-circuit service to remote DTEs. There are two kinds of virtual circuit. One is the virtual call: to establish the circuit, the calling DTE sends a call request packet to its DCE; the remote DCE delivers an incoming call packet to the called DTE; the called DTE responds with a call accepted packet; and when the calling DTE receives the call connected packet, confirming acceptance by the other side, the virtual circuit is established. The other is the permanent virtual circuit: a virtual circuit between DTEs designated by negotiation when the DTE subscribes to the X.25 network, requiring no call establishment or release procedure. Under normal circumstances, the DTEs at both ends of a permanent virtual circuit can send and receive data at any time. As described in Section 4.1, each virtual circuit is assigned a virtual circuit number. In X.25 a virtual circuit number consists of a logical channel group number (0-15) and a logical channel number (0-255), and the virtual circuit numbers at the two ends of a circuit are independent of each other. The DCE maps each virtual circuit number onto its virtual circuit. The ranges of virtual circuit numbers used for virtual calls and for permanent virtual circuits are determined and allocated by negotiation with the administration when service is subscribed.
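The virtual call sequence described above can be summarized as a small state machine. The state and packet names below are simplified from the text for illustration; this is not the full X.25 state table:

```python
class VirtualCall:
    """Simplified calling-DTE view of X.25 virtual call setup and clearing.
    States and events are reduced to the exchange described in the text."""

    TRANSITIONS = {
        ("READY", "CALL REQUEST"): "CALL SETUP",          # DTE -> DCE
        ("CALL SETUP", "CALL CONNECTED"): "DATA TRANSFER",  # acceptance confirmed
        ("DATA TRANSFER", "CLEAR REQUEST"): "CLEARING",
        ("CLEARING", "CLEAR CONFIRMATION"): "READY",
    }

    def __init__(self):
        self.state = "READY"

    def event(self, packet: str) -> str:
        """Advance the call on receipt/emission of the named packet."""
        self.state = self.TRANSITIONS[(self.state, packet)]
        return self.state

call = VirtualCall()
```

A permanent virtual circuit, by contrast, would simply start life in the DATA TRANSFER state, since it needs no call setup or clearing.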
A public data network has two modes of operation: virtual-circuit mode and datagram mode. Although some other network architectures (such as Ethernet) still use datagram techniques effectively, the datagram service was removed from the X.25 standard in the 1980 revision and replaced by an optional extension called Fast Select. The X.25 virtual-circuit service belongs to the connection-oriented OSI service mode, which exactly matches the definition of network-layer service in the OSI reference model and opens the way to combining public data networks with OSI. The function of the OSI network layer is to provide relaying and routing, and related functions, independently of the transport layer. In a connection-oriented network-layer service, the communicating network-layer entities must first establish a connection, which corresponds to the call establishment procedure for setting up a virtual circuit in X.25. The network layer provides the transport layer with network services that are independent of routing and relaying.

X.25 packet format

At the packet level, all information is transmitted and processed with the packet as the basic unit. Whether it is data to be transferred between DTEs or control information used by the switching network, it must be expressed in packet form and transmitted across the DTE/DCE interface following the data link protocol. Thus, for transmission at the data link layer, a packet is embedded in the information field of an information frame (I frame), giving the following format: flag field F / address field A / control field C / [packet] / frame check sequence FCS / flag field F. Each packet consists of a packet header and data; the general format is shown in Figure 4.6. The data part of a packet (which may be empty) is normally handed to a higher-level protocol or user program for processing, so the packet protocol does not specify it further. The packet header is used for network control and mainly carries local DTE/DCE control information. Its length varies with the packet type, but it must include at least the first three bytes, which carry the general format identifier, the logical channel identifier, and the packet type identifier. Their meanings are as follows:
(1) General Format Identifier (GFI): the first 4 bits of the first byte of the packet, indicating the format of the rest of the packet header. The first bit (b8) is called the Q bit, or qualifier bit, and is used only in data packets. It allows special handling of the data in the packet and can distinguish normal data from control information; in other packet types this bit is set to "0". The second bit (b7) is called the D bit, or delivery confirmation bit. Its purpose is to indicate whether the DTE wishes to use the packet receive sequence number P(R) to acknowledge the data it receives. During call establishment, the D bit can be used between DTEs to negotiate whether the D-bit procedure will apply during the virtual call. The third and fourth bits (b6, b5) indicate whether data-packet sequence numbers are 3 bits, i.e. modulo 8 (b6 b5 set to "01"), or 7 bits, i.e. modulo 128 (b6 b5 set to "10"); once chosen, the corresponding packet formats change accordingly.
(2) Logical channel identifier: the logical channel group number (LCGN), formed by the remaining four bits (b4, b3, b2, b1) of the first byte, and the logical channel number (LCN), formed by the second byte, together identify the logical channel. (3) Packet type identifier: the third byte, which distinguishes the type and function of the packet. If the last bit (b1) of this byte is "0", the packet is a data packet; if it is "1", the packet is a control packet, such as a call request or incoming call packet, or a clear request or clear indication packet. If the last three bits (b3, b2, b1) of the byte are all "1", the packet is some kind of acknowledgement or acceptance packet.
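A small parser for the three header bytes just described makes the bit layout concrete (modulo-8 format as given in the text; the helper name is invented, and b8 is taken as the most significant bit of each byte):

```python
def parse_x25_header(header: bytes) -> dict:
    """Decode the first three bytes of an X.25 packet header."""
    b1, b2, b3 = header[0], header[1], header[2]
    return {
        "Q": (b1 >> 7) & 1,                         # b8: qualifier bit
        "D": (b1 >> 6) & 1,                         # b7: delivery confirmation
        "modulo": 8 if (b1 >> 4) & 0b11 == 0b01 else 128,  # b6 b5
        "LCGN": b1 & 0x0F,                          # logical channel group number
        "LCN": b2,                                  # logical channel number
        "data_packet": (b3 & 1) == 0,               # b1 of byte 3: 0 => data
    }

hdr = parse_x25_header(bytes([0x12, 0x05, 0x02]))
# modulo-8 format, logical channel group 2, channel 5, a data packet
```

Combining the group number and channel number as a 12-bit value gives the virtual circuit number discussed in the previous section.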
The fourth and subsequent bytes are defined differently depending on the packet type. The X.25 packet-level protocol specifies many other packet types. Because of the asymmetry between DTE and DCE, packets of the same type with the same type encoding have different meanings and interpretations depending on the direction of transmission, and their handling also differs. For this reason, a packet travelling from the local DTE represents a command or response sent by the local DTE via the DCE to the remote DTE; conversely, a packet travelling from the DCE represents a command or response sent by the DCE, on behalf of the remote DTE, to the local DTE.
The encoding of the data packet type is very similar to the control field C of the data-link-level frame format, except that an M bit replaces the P/F bit of the I frame. The final bit "0" is the distinguishing bit of the data packet type. The M (More data) bit set to "1" indicates that more data follows, that is, the data in the current packet logically continues in the next data packet on the same logical channel. P(S) and P(R) are the packet send sequence number and receive sequence number respectively, equivalent to N(S) and N(R) in the frame format. Their main function, however, is to control the flow of data sent to, or received from, the packet-switched network on each logical channel, not merely to provide a means of acknowledgement between sites; the aim is to regulate the traffic on each logical channel so as to prevent excessive pressure on the packet-switched network. In effect, the values of P(S) and P(R) define a "window" on a given logical channel, indicating how many unacknowledged packets may be in transit on that channel. The maximum number of unacknowledged packets that may be transmitted is called the window size W. The window size of each virtual circuit is assigned when the call is established, but it cannot exceed 7 packets (with 3-bit sequence numbers) or 127 packets (with 7-bit sequence numbers).
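The P(S)/P(R) window on a logical channel can be sketched as follows (the class is invented for illustration; modulo-8 sequence numbers, with window W = 2 in the example):

```python
class LogicalChannel:
    """Flow control on one logical channel: at most `window` data packets
    may be outstanding (sent but not yet acknowledged by P(R))."""

    def __init__(self, window: int, modulo: int = 8):
        assert window <= modulo - 1      # W <= 7 for modulo-8 numbering
        self.window, self.modulo = window, modulo
        self.next_ps = 0                 # P(S) of the next packet to send
        self.last_pr = 0                 # latest P(R) received from the peer

    def can_send(self) -> bool:
        outstanding = (self.next_ps - self.last_pr) % self.modulo
        return outstanding < self.window

    def send(self) -> int:
        """Emit the next data packet and return its P(S)."""
        ps = self.next_ps
        self.next_ps = (self.next_ps + 1) % self.modulo
        return ps

    def acknowledge(self, pr: int) -> None:
        """Record a P(R) from the peer: it expects packet pr next."""
        self.last_pr = pr

ch = LogicalChannel(window=2)
```

After two unacknowledged packets the channel must pause; a P(R) of 1 from the peer rotates the window forward and transmission resumes, which is the per-channel throttling the text describes.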
Like the data-link-level frame format, the packet level also includes three flow-control packet types: RR, RNR, and REJ. The type field of these packets carries only the receive sequence number P(R), not a send sequence number P(S). RR informs the other side that the sender is ready to receive data packets on a given logical channel; an RNR condition can be cleared by an RR packet sent in the same direction. As at the data link level, the packet level also includes some unnumbered packets, such as the interrupt request packet, which can be sent immediately, without waiting for previously sent packets, even when the other side cannot accept data. An interrupt request packet can carry only one byte of user data, placed in the cause field, to convey the interrupt information or reason to the other side.
X.25 also defines many other packet types, including release request/indication, reset request/indication, and restart request/indication. Except that the reset request/indication packet carries one extra diagnostic code, these share the format of the interrupt request packet. Each of them includes a "cause" field storing the reason for the corresponding action. The difference between reset and restart deserves explanation: a reset request re-initializes a virtual call or permanent virtual circuit that is in the data transfer state, while a restart simultaneously releases all virtual calls on the DTE/DCE interface and resets all permanent virtual circuits. The last packet format contains only three bytes; packets of this format include the various acknowledgement packets, used to confirm requests or indications for call setup, release, interrupt, reset, and restart.

Network layer ISDN

Network layer definition

ISDN is literally an abbreviation of Integrated Services Digital Network. However, "IS" can also be understood as a standard interface for all services, and "DN" as digital end-to-end connectivity. Modern society needs economical and fast means of accessing information. ISDN was created against the background of this social need and the rapid development of computer technology, communication technology, and VLSI technology. The goal of ISDN is to provide economical, efficient, end-to-end digital connectivity supporting a wide range of services, both voice and non-voice. Users need only a limited set of network connections and interface standards to access network information over a large area, or even globally.

Network layer system structure

The ISDN system structure mainly concerns the interface between the user equipment and the ISDN switching system. An important concept is the digital bit pipe: a pipe that carries a bit stream between user equipment and transmission equipment. Whether the bits come from a digital telephone, a digital terminal, a digital fax machine, or any other device, the bit streams can pass through the pipe in both directions. A digital bit pipe supports multiple independent channels by time-division multiplexing of the bit stream; the exact format of the bit stream and its multiplexing are defined in the interface specification of the digital bit pipe. Two bit-pipe standards have been defined: a low-bandwidth standard for homes and a high-bandwidth standard for businesses. The latter supports multiple channels, and multiple bit pipes can be configured if needed.
Figure 4.11 (a) shows the configuration for homes or small enterprises. A network terminating device, NT1, is placed between the user equipment and the ISDN switching system; it sits close to the user equipment and is connected by a telephone line to the switching system several kilometers away. NT1 is equipped with a connector into which a passive bus cable can be plugged. Up to eight ISDN telephones, terminals, or other devices can be attached to the bus cable, connected much as in a local area network. From the user's perspective, the interface to the network is the connector on NT1. NT1 is not merely a patch panel; it also performs network management, testing, maintenance, and performance monitoring. Each device on the passive bus must have a unique address, and NT1 contains contention resolution logic: when several devices access the bus at the same time, NT1 decides which device gets access. In terms of the OSI reference model, NT1 is a physical layer device.
Large enterprises need the configuration shown in Figure 4.11 (b), because there are often more simultaneous calls than the bus can handle. This configuration adds an NT2 device. NT2 is in fact the CBX discussed earlier: it connects to NT1 and provides the actual interface for telephones, terminals, and other devices. There is no essential difference between NT2 and an ISDN switching system other than scale. For telephone or digital communication within the organization, only a four-digit extension number need be dialed, which does not involve the ISDN switching system; dialing "9" connects to an outside line, and the CBX assigns a channel to the call. CCITT defines four reference points, called R, S, T, and U, as shown in Figure 4.11. The U reference point connects the ISDN switching system to NT1, using a two-wire copper twisted pair that may be replaced by optical fiber in the future. The T reference point is the connector NT1 presents to the user. The S reference point is the interface between the ISDN CBX and ISDN terminals. The R reference point connects a terminal adapter to a non-ISDN terminal; many different interfaces are used at the R reference point.

Network layer information transfer

People's requirements for communication keep growing. Beyond the original voice, data, and fax services, they now demand integrated transmission of broadband services such as high-definition television, broadcast television, and high-speed data fax. The development of optical fiber transmission, microelectronics, broadband communication technology, and computer technology has provided a foundation for meeting these rapidly growing requirements. As early as January 1985, CCITT Study Group 18 established a special group to study Broadband ISDN; its findings appear in the revised I-series recommendations adopted in 1988. The development from narrowband ISDN to broadband ISDN can be divided into three stages. In the first stage, voice, data, and image services are further integrated into a preliminary B-ISDN composed of three independent networks (as shown in Figure 4.12), and a broadband switching network based on ATM realizes the integrated transmission of voice, high-speed data, and moving images. The second stage is characterized by standardized B-ISDN user/network interfaces, optical fiber reaching the home, and wide use of optical switching technology, so that broadband services including multi-channel HDTV (High Definition Television) can be provided. The third stage is marked by the introduction of an intelligent management network into broadband ISDN: an intelligent network control center manages the three basic networks. Such intelligent networks can also be called intelligent expert systems.
B-ISDN uses four main transfer modes: high-speed packet switching, high-speed circuit switching, the asynchronous transfer mode ATM, and optical switching. High-speed packet switching applies the basic technique of packet switching with a simplified X.25 protocol; it uses connection-oriented service without link-level flow control or error control, and combines the advantages of packet switching and synchronous time-division switching. A test network has been put into trial operation. High-speed circuit switching is mainly multirate time-division switching (TDSM), which allows channels to be allocated in time, with bandwidth an integer multiple of the basic rate. Because this is fast circuit switching, its channel management and control are very complicated; many issues remain to be studied, and it has not yet entered practical use. The main equipment of optical switching is the optical switch, which introduces optical technology into the transmission and control loops to achieve high-speed transmission and switching of digital signals. Because optical integrated circuit technology is not yet mature, optical switching is not expected to enter practical use until the 21st century.

How the network layer works

ATM is characterized by further simplification of network functions: an ATM network performs no data link layer functions, leaving both error control and flow control to the terminals. Figure 4.15 compares the functions of the three modes: packet switching, frame relay, and ATM switching. The switching nodes of a packet-switched network participate in all functions of OSI layers one through three. Frame relay nodes participate only in the core part of the layer-two functions, namely frame delimitation, bit stuffing, and CRC checking; the remaining layer-two functions, error control and flow control, together with the layer-three functions, are left to the terminals. ATM networks are simpler still: apart from the layer-one functions, the switching nodes perform no work at all. In its distribution of functions, an ATM network resembles a circuit-switched network, so it is quite reasonable to say that ATM combines the advantages of packet switching and circuit switching.
ATM overcomes the shortcomings of other transmission methods and can accommodate any type of service, providing satisfactory service regardless of its speed, burstiness, real-time requirements, and quality requirements. CCITT defined ATM in Recommendation I.113: ATM is a transfer mode in which information is organized into cells, and the cells carrying the information of a given connection need not appear periodically; in this sense the transfer mode is asynchronous. Cells are in fact packets; they are called cells only to distinguish them from X.25 packets. An ATM cell has a fixed length of 53 bytes: 5 bytes of header and 48 bytes of information field, or payload. The header carries various control information, chiefly the logical address indicating the cell's destination, along with some maintenance information, a priority, and an error check for the header. The information field carries user information from many different services, and this information traverses the network transparently. The cell format is independent of the service: the information of any service is segmented and encapsulated into cells of the same uniform format.
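The fixed 53-byte layout can be expressed as a C struct. This is only a sketch of the sizes described above; the field names are my own, and the real 5-byte header packs several subfields (address label, priority, header error check) rather than being an opaque byte array.

```c
#include <assert.h>
#include <stdint.h>

/* ATM cell: always 53 bytes, 5-byte header + 48-byte payload. */
#define ATM_HEADER_LEN  5
#define ATM_PAYLOAD_LEN 48

typedef struct atm_cell {
    uint8_t header[ATM_HEADER_LEN];   /* logical address label, priority,
                                         maintenance info, header check */
    uint8_t payload[ATM_PAYLOAD_LEN]; /* user information, carried
                                         transparently by the network  */
} atm_cell;
```

Because both members are byte arrays, the struct occupies exactly 53 bytes with no padding, matching the cell size the text gives.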
ATM uses asynchronous time-division multiplexing; see Figure 4.16. Cells from different sources are gathered and queued in a buffer, and the queued cells are output one by one onto the transmission line, forming an end-to-end stream of cells. The cell header carries a label (such as A or B) indicating the cell's destination, and the network forwards each cell according to this label. Because the information generated by a source is random, cells arrive at the queue randomly: cells of high-speed services arrive frequently and in bursts, while cells of low-speed services arrive sparsely. Cells are queued on a first come, first served basis and placed on the transmission line in output order. Cells with the same label do not occupy fixed time intervals on the line and do not recur periodically; there is no relationship between position and identity, and information streams are distinguished only by the labels in their headers. This multiplexing method is called asynchronous time-division multiplexing (Asynchronous Time Division Multiplex), also known as statistical multiplexing (Statistic Multiplex). In synchronous time-division multiplexing (such as PCM multiplexing), information is distinguished by its time slot within a frame: a time slot corresponds to a channel, and no additional header is needed to identify the information.

The network layer of the TCP/IP protocol suite is centered on IP. Besides IP itself, it includes the companion protocols ARP, RARP, ICMP, and IGMP.

Network layer IP protocol

In TCP/IP, every host is assigned a 32-bit IP address: IP address = network address + host address. IP addresses are classified by their format into four classes: Class A, Class B, Class C, and Class D, as follows.
Format (leading bits, network field, host field):
Class A: 0, network (7 bits), host address (24 bits);
Class B: 10, network (14 bits), host address (16 bits);
Class C: 110, network (21 bits), host address (8 bits);
Class D: 1110, multicast address (28 bits);
the prefix 11110 is reserved for future formats.
Thus the Class A address space is 0-127, with at most 126 networks and 16,777,214 hosts per network; the Class B space is 128-191, with at most 16,384 networks and 65,534 hosts; the Class C space is 192-223, with at most 2,097,152 networks and 254 hosts; the Class D space is 224-239. Overview of Class C address space allocation, by region: multi-regional 192.0.0.0 ~ 193.255.255.255; Europe 194.0.0.0 ~ 195.255.255.255; others 196.0.0.0 ~ 197.255.255.255; North America 198.0.0.0 ~ 199.255.255.255; Central and South America 200.0.0.0 ~ 201.255.255.255; Pacific region 202.0.0.0 ~ 203.255.255.255; others 204.0.0.0 ~ 205.255.255.255; others 206.0.0.0 ~ 207.255.255.255. Note: "multi-regional" denotes address space allocated before the plan took effect; "others" denotes geographical divisions outside the named regions.
Special IP address formats. Broadcast address: when every bit of the host identifier field (or of both the network and host fields) is set to 1, the address marks the datagram as a broadcast, which can be delivered to all subnets and hosts in the network; for example, 128.2.255.255 means all hosts on network 128.2. "This network" address: the host identifier field may be set to all 0s, meaning "this host", and the network identifier field may be set to all 0s, meaning "this network"; for example, 128.2.0.0 denotes the network whose address is 128.2. An IP address with the network identifier field all 0s is useful when a host does not know the address of its network. Private IP addresses: in some cases an organization's network need not connect to the Internet or to another organization's network, so it need not apply for and register IP addresses and may use any addresses. RFC 1597 reserves certain addresses as private: Class A, 10.0.0.0 to 10.255.255.255; Class B, 172.16.0.0 to 172.31.255.255; Class C, 192.168.0.0 to 192.168.255.255.
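The three private ranges listed above amount to the prefixes 10/8, 172.16/12, and 192.168/16, which a few shifts can test. A sketch, with the function name `is_private` my own; real code would more likely work on `struct in_addr` in network byte order.

```c
#include <assert.h>
#include <stdint.h>

/* Return nonzero if a (host byte order) is in an RFC 1597 private range. */
static int is_private(uint32_t a)
{
    if ((a >> 24) == 10) return 1;                  /* 10.0.0.0/8     */
    if ((a >> 20) == ((172u << 4) | 1)) return 1;   /* 172.16.0.0/12  */
    if ((a >> 16) == ((192u << 8) | 168)) return 1; /* 192.168.0.0/16 */
    return 0;
}
```

Note that 172.16/12 covers exactly 172.16.0.0 through 172.31.255.255: the top 12 bits of every address in that range are the same.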

Network layer address resolution

The ARP protocol is an abbreviation of Address Resolution Protocol. In a local area network, what is actually transmitted is a "frame", which carries the MAC address of the target host. On an Ethernet, a host must know the MAC address of the target host before it can communicate with it directly. How is this target MAC address obtained? Through the Address Resolution Protocol. "Address resolution" is the process by which a host converts a destination IP address into the destination MAC address before sending a frame. The basic function of ARP is to look up the MAC address of a target device from its IP address so that communication can proceed. The protocol belongs to the link layer. An Ethernet data frame travels from one host to another based on the 48-bit Ethernet (hardware) address, not the 32-bit IP address, so the kernel (e.g. the driver) must know the destination's hardware address before it can send data. Point-to-point links, of course, do not need ARP. The data structure of the ARP protocol is as follows:
typedef struct arphdr
{
    unsigned short arp_hrd;    /* hardware type */
    unsigned short arp_pro;    /* protocol type */
    unsigned char  arp_hln;    /* hardware address length */
    unsigned char  arp_pln;    /* protocol address length */
    unsigned short arp_op;     /* ARP operation type */
    unsigned char  arp_sha[6]; /* sender's hardware address */
    unsigned long  arp_spa;    /* sender's protocol address */
    unsigned char  arp_tha[6]; /* target hardware address */
    unsigned long  arp_tpa;    /* target protocol address */
} ARPHDR, *PARPHDR;

To explain the role of the ARP protocol, we must understand how data is transmitted on the network. Here is a simple PING example.
Suppose our computer's IP address is 192.168.1.1 and we execute the command ping 192.168.1.2. This command sends ICMP packets via the ICMP protocol, and the process goes through the following steps:
1. The application constructs a data packet; in this example an ICMP packet is generated and handed to the kernel (the network driver).
2. The kernel checks whether the IP address can be converted into a MAC address, i.e. it looks up the IP-MAC mapping in the local ARP cache.
3. If the mapping exists, skip to step 7; if not, continue with the following steps.
4. The kernel broadcasts an ARP request, with destination MAC address FF-FF-FF-FF-FF-FF and ARP operation type REQUEST (1), containing its own MAC address.
5. When host 192.168.1.2 receives the ARP request, it sends an ARP REPLY (2) containing its own MAC address.
6. The IP-MAC mapping of host 192.168.1.2 is thus obtained and saved in the local ARP cache.
7. The kernel converts the IP address into the MAC address, encapsulates it in the Ethernet header, and sends the data out.
Use the arp -a command to view the local ARP cache: after executing a local PING, the cache will contain a record for the destination IP. Of course, if your packet is destined for a different network segment, the cache will instead hold the IP-MAC mapping of the gateway. Knowing ARP's role, we can see clearly that sending packets outward depends on the ARP protocol, and of course on the ARP cache. Note that all ARP operations are performed automatically by the kernel, with no involvement of other applications, and that the ARP protocol is used only on the local network.
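Steps 2 and 6 of the walkthrough amount to a lookup and an insert on an IP-to-MAC table. The following is a toy sketch of that cache (the names `arp_entry`, `arp_lookup`, and `arp_insert` are my own; a real kernel cache also handles aging, locking, and replacement):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical ARP cache entry: one IP-to-MAC mapping. */
struct arp_entry {
    uint32_t ip;     /* protocol address, host byte order */
    uint8_t  mac[6]; /* hardware address */
};

#define CACHE_SIZE 8
static struct arp_entry cache[CACHE_SIZE];
static int cache_len = 0;

/* Step 2: return the cached MAC for ip, or NULL; on NULL the caller
   would broadcast an ARP REQUEST (step 4 of the walkthrough).      */
static const uint8_t *arp_lookup(uint32_t ip)
{
    for (int i = 0; i < cache_len; i++)
        if (cache[i].ip == ip)
            return cache[i].mac;
    return 0;
}

/* Step 6: record the mapping learned from an ARP REPLY. */
static void arp_insert(uint32_t ip, const uint8_t mac[6])
{
    if (cache_len < CACHE_SIZE) {
        cache[cache_len].ip = ip;
        memcpy(cache[cache_len].mac, mac, 6);
        cache_len++;
    }
}
```

A first lookup for 192.168.1.2 misses, triggering the broadcast; after the reply is inserted, subsequent lookups hit the cache and the frame can be sent directly.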

Network layer reverse address resolution

When a system with a local disk is bootstrapped, it generally reads its IP address from a configuration file on the disk. But diskless machines, such as X terminals or diskless workstations, need some other way to obtain an IP address. Each system on the network has a unique hardware address, configured by the network interface manufacturer. A diskless system's RARP implementation reads the unique hardware address from the interface card and then broadcasts a RARP request (a frame of data on the network) asking some host to respond with the diskless system's IP address (in a RARP reply). The process is conceptually simple, but it is often harder to implement than ARP. The official specification of RARP is RFC 903 [Finlayson et al. 1984]. RARP packet format: the format of RARP packets is essentially the same as that of ARP packets. The main differences are that the frame type code of a RARP request or reply is 0x8035, the operation code of a RARP request is 3, and that of a reply is 4. As with ARP, a RARP request is broadcast while a RARP reply is generally unicast. RARP server design: although RARP is conceptually simple, the design of a RARP server is system-dependent and complex. By contrast, providing an ARP server is simple and is usually part of the kernel's TCP/IP implementation: because the kernel knows its own IP address and hardware address, when it receives an ARP request asking for its IP address it need only reply with the corresponding hardware address.
RARP server as a user process: the complexity of a RARP server comes from the fact that the server generally provides hardware-to-IP address mappings for multiple hosts (all the diskless systems on the network). The mappings are kept in a disk file, and because the kernel generally does not read and parse disk files, the RARP server's functionality is provided by a user process rather than as part of the kernel. To complicate matters further, RARP requests are transmitted as a special type of Ethernet frame, which means the RARP server must be able to send and receive that frame type. In Appendix A we describe how BSD packet filters, Sun's network interface tap, and the SVR4 data link provider interface can be used to receive these frames. Because sending and receiving such frames is system-dependent, the implementation of a RARP server is tied to the system.
Multiple RARP servers per network: another complication of RARP server implementation is that RARP requests are broadcast at the hardware layer, which means they are not forwarded by routers. So that diskless systems can still bootstrap when one RARP server is down, multiple RARP servers are usually provided on a network (a single cable, say). As the number of servers grows (to provide redundancy), network traffic grows with it, since every server sends a RARP reply to every RARP request. A diskless system that sends a RARP request generally uses the first RARP reply it receives (with ARP we never encounter this situation, because only one host sends an ARP reply). Moreover, all the RARP servers may reply at the same time, which increases the probability of Ethernet collisions.

Network layer Internet control message

The role of ICMP: the IP protocol has two shortcomings, no error control and no query mechanism, and ICMP was created to remedy them. ICMP mainly improves the chances of successful delivery of IP datagrams. Errors are reported and queries answered during the transmission of IP datagrams: for example, when the destination host or network is unreachable, when packets are discarded, when routes are congested, when asking whether the destination network is reachable, and so on.
ICMP has two kinds of messages: error report messages and query messages. Error report messages: destination unreachable (due to routing table problems, hardware failure, an unreachable protocol, an unreachable port, and so on, the router or destination host sends a destination-unreachable message to the source); source quench (sent when congestion occurs, compensating for IP's lack of flow control); time exceeded (a routing loop, or the time-to-live reaching 0); parameter problem (an ambiguous field in the IP datagram header); redirect (the route taken is wrong or not optimal). Query messages: echo request or reply (used to test connectivity, e.g. by the PING command); timestamp request or reply (used to compute round-trip time or synchronize clocks); address mask request or reply (to obtain mask information); router solicitation or advertisement (to learn about routers on the network). ICMP is a protocol of the Internet (IP) layer: an ICMP message is carried as the data of an IP datagram, to which the datagram header is added before it is sent.

Network layer PING

The PING (Packet InterNet Groper) command, at the application layer, is used to test connectivity between two hosts. PING uses ICMP echo request and echo reply messages, which are ICMP query messages; it is a special case in which the application layer uses the network-layer ICMP directly, without going through TCP or UDP at the transport layer. Protocol field of the IP datagram header: this field indicates which protocol the datagram carries, so that the network layer of the destination host knows which protocol should handle it.
The Internet Group Management Protocol (IGMP) is used by IP hosts to report their multicast group memberships to all directly adjacent multicast routers. This document describes only the use of IGMP to determine group membership between hosts and routers; a router that is itself a member of a multicast group should also behave as a host, responding even to its own queries. IGMP can also be used between routers, but that use is not described here. Like ICMP, IGMP is an integral part of IP, and all hosts wishing to receive IP multicast must implement it. IGMP messages are encapsulated in IP datagrams with IP protocol number 2. All IGMP messages described in this document are sent with a TTL of 1 and carry the Router Alert option in their IP header. The IGMP messages of interest to hosts have the following format: 8-bit type + 8-bit maximum response time + 16-bit checksum + 32-bit group address.
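The message layout quoted above can be written as a C struct. A sketch only: the field names are my own, not a kernel definition, and on the wire the multi-byte fields are in network byte order.

```c
#include <stdint.h>

/* IGMP message as described above:
   8-bit type + 8-bit maximum response time +
   16-bit checksum + 32-bit group address = 8 bytes. */
typedef struct igmp_msg {
    uint8_t  type;          /* e.g. membership query or report */
    uint8_t  max_resp_time; /* maximum response time */
    uint16_t checksum;      /* Internet checksum of the message */
    uint32_t group_addr;    /* multicast group address */
} igmp_msg;
```

With natural alignment the struct occupies exactly 8 bytes, the size the format line implies, so it maps directly onto the wire layout.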

Network layer IGMP protocol

Multicast protocols include group membership management protocols and multicast routing protocols. The former manage the joining and leaving of multicast group members; the latter exchange information between routers to build multicast trees. IGMP belongs to the former: it is the protocol multicast routers use to maintain multicast group membership information, and it runs between hosts and multicast routers. IGMP messages are encapsulated in IP packets, with IP protocol number 2.
If a host wants to receive multicast packets sent to a particular group, it needs to listen for all packets destined to that group. To solve the routing of multicast packets on the Internet, a host must notify a multicast router on its subnet when it joins or leaves a group, and IGMP is the protocol that accomplishes this task in multicasting. The multicast router thus knows which multicast groups have members on the network and can decide whether to forward multicast packets onto it. When a multicast router receives a multicast packet, it checks the packet's multicast destination address and forwards it on an interface only if that group has members there.
IGMP provides the information required in the final stage of forwarding multicast packets to their destinations, and implements the following two functions:
  1. The host informs the router through IGMP that it wants to receive or leave a specific multicast group.
  2. The router periodically queries IGMP to check whether the multicast group members in the LAN are active, and collects and maintains the group membership of the connected network segments.
There are three versions of IGMP, namely IGMP v1, v2, and v3. [1]
