What types of local network technologies are there? Basic technologies of local area networks

Technologies for building local computer networks change quite quickly, adapting to consumer needs. No one wants to wait for hours while a favorite movie downloads or a presentation full of photos is transferred. Modern networks improve connections between computers and other devices to the point where, for most content, the download speed feels to the user as fast as reading from a local hard drive.

Basic technologies of local networks

Basic technologies for building local networks, also called architectures, can be divided into two generations. The first generation provides low and medium data transfer rates, the second - high.

The first generation of technologies includes those that operate using a cable with a copper core:

  • ARCnet (speed up to 2.5 Mbit/s);
  • Ethernet (up to 10 Mbit/s);
  • Token Ring (up to 16 Mbit/s).

The second generation of architectures is based primarily on fiber optic lines, and some variants are built using high-quality copper cable. These include:

  • FDDI (up to 100 Mbit/s);
  • ATM (up to 155 Mbit/s);
  • Fast Ethernet (up to 100 Mbit/s);
  • Gigabit Ethernet (up to 1000 Mbit/s).

Technologies for building local networks

Network technology involves the use of a minimum set of standard protocols and the software and hardware necessary to support them. There are many different protocols, but the most popular are those based on Ethernet, FDDI, Token Ring and ARCnet.

The most popular is Ethernet technology and its more modern variants. To build it, thin and thick coaxial cable is used, as well as twisted pair, which is easier to install and maintain.

Technology for setting up a local area network

The most common technology these days is the Ethernet architecture; its high-speed variants Fast Ethernet and Gigabit Ethernet combine easily with each other and with classic Ethernet into a single network, which simplifies scaling. The data transfer speed in such a network depends on the type of cable, with options ranging from thin coaxial cable to multimode fiber optic cable operating at wavelengths of up to 1300 nm.

  • ARCnet networks are outdated and slow (2.5 Mbit/s), but they can still be found at a number of enterprises because they used to be in great demand: the network is very reliable, the adapters are inexpensive, and the configuration is flexible. It typically uses a bus or passive star topology.
  • The ring-type Token Ring network is also largely part of LAN history, but it is worth knowing about, because it became the basis and prototype for the new-generation token network of the FDDI standard.
  • FDDI (Fiber Distributed Data Interface) networks with a token access method use fiber optic cable. It is a high-speed architecture that can support up to 1000 subscribers, with a maximum ring length of 20 kilometers and a distance between subscribers of no more than 2 km. These features make it suitable for medium and small enterprises with a modest number of workstations.

Local network technology developers

Most technologies for building local networks came to Russia from abroad.

  • The Arcnet standard was developed by Datapoint under the leadership of engineer John Murphy, and was introduced to the public in 1977.
  • The Ethernet standard was introduced by the American company Xerox in 1975; the second generation of the network was developed by DEC, Intel and Xerox, which is why it became known as Ethernet DIX. On its basis the IEEE 802.3 protocol was developed, which remains the foundation of wired Ethernet networks today.
  • The Token-Ring standard was created by IBM specifically for the computers it produced. But since the market is full of devices from many different manufacturers, it never achieved widespread adoption.
  • The FDDI standard appeared in the mid-1980s and became the basis for building second-generation networks. It builds on Token-Ring technology, in which a token passed from computer to computer grants the right to transmit. The standard was developed by ANSI and from the outset supported data transfer rates of 100 Mbit/s over dual fiber optic cables.

In local networks, the main role in organizing the interaction of nodes belongs to the data link layer protocol, which is oriented toward a very specific LAN topology. Thus, the most popular protocol of this level, Ethernet, is designed for a "common bus" topology, in which all network nodes are connected in parallel to a bus shared by all of them, while the Token Ring protocol is designed for a "ring" topology. Simple cable connection structures between the PCs of the network are used, and, to simplify and reduce the cost of hardware and software, the cable is shared by all PCs in time-division mode (TDM). Such simple solutions, characteristic of the developers of the first LANs in the second half of the 1970s, had, along with their advantages, negative consequences, the main ones being limits on performance and reliability.

Since in a LAN with the simplest topology ("common bus", "ring", "star") there is only one path for transmitting information, the network's performance is limited by the throughput of this path, and its reliability by the reliability of this path. Therefore, as the scope of application of local networks grew, these restrictions were gradually lifted with the help of special communication devices (bridges, switches, routers). The basic LAN configurations ("bus", "ring") have turned into elementary building blocks from which more complex local network structures, with parallel and backup paths between nodes, are formed.

However, within the basic structures of local networks, the same Ethernet and Token Ring protocols continue to operate. The integration of these structures (segments) into a common, more complex local network is carried out using additional equipment, and the interaction of PCs in such a network is carried out using other protocols.

In the development of local networks, other trends have also emerged in addition to those noted above:

The abandonment of shared transmission media and a transition to active switches, to which the network PCs are connected by individual communication lines;

The appearance, with the use of switches, of a new operating mode in LANs - full duplex (in the basic structures of local networks, PCs operate in half-duplex mode, since the station's network adapter at any given moment either transmits its own data or receives someone else's, but never does both at once). Today every LAN technology is adapted to operate in both half-duplex and full-duplex modes.

The standardization of LAN protocols was carried out by Committee 802, organized in 1980 within the IEEE. The standards of the IEEE 802.X family cover only the two lower layers of the OSI model - physical and data link. It is these layers that reflect the specifics of local networks; the higher layers, starting with the network layer, have features common to networks of any class.

In local networks, as already noted, the data link layer is divided into two sublayers:

Logical Link Control (LLC);

Media Access Control (MAC).

The protocols of the MAC and LLC sublayers are mutually independent, i.e., each protocol of the MAC sublayer can work with any protocol of the LLC sublayer, and vice versa.

The MAC sublayer ensures the sharing of a common transmission medium, and the LLC sublayer organizes the transmission of frames with different levels of quality of transport service. Modern LANs use several MAC-sublayer protocols that implement different algorithms for accessing the shared medium and define the specifics of the Ethernet, Fast Ethernet, Gigabit Ethernet, Token Ring, FDDI and 100VG-AnyLAN technologies.

LLC Protocol. For LAN technologies, this protocol ensures the necessary quality of transport service. It sits between the network protocols and the MAC-sublayer protocols. Using the LLC protocol, frames are transmitted either by the datagram method or using procedures that establish a connection between the interacting network stations and recover frames by retransmission if they arrive corrupted.

There are three operating modes of the LLC protocol:

LLC1 is a connectionless procedure without acknowledgment. This is the datagram mode of operation. It is usually used when error recovery and data ordering are handled by higher-level protocols;

LLC2 is a procedure with connection establishment and acknowledgment. Under this protocol, a logical connection is established between the interacting PCs before transmission begins and, if necessary, procedures are performed to recover frames after errors and to order the flow of frames within the established connection (the protocol operates in the sliding-window mode used in ARQ systems). The logical channel of the LLC2 protocol is full-duplex, i.e. data can be transmitted in both directions simultaneously;

LLC3 is a connectionless procedure with acknowledgment. It is an additional protocol used when the time delays before sending data (for example, those associated with establishing a connection) are unacceptable, but confirmation that the data was received correctly is required. The LLC3 protocol is used in real-time networks that control industrial facilities.

These three protocols are common to all media access methods defined by IEEE 802.X standards.

Frames of the LLC sublayer are divided by purpose into three types: information frames (for data transmission), control frames (for transmitting commands and responses in LLC2 procedures) and unnumbered frames (for transmitting unnumbered commands and responses in LLC1 and LLC2).

All frames have the same format: sender address, recipient address, a control field (carrying the information needed to verify correct transmission), a data field, and two framing one-byte "flag" fields that mark the boundaries of the LLC frame. The data field may be absent in control and unnumbered frames. Information frames additionally contain a field with the number of the frame being sent and a field with the number of the frame expected next.

Ethernet technology (802.3 standard). This is the most common local network standard. More than 5 million LANs currently operate using this protocol. There are several variants and modifications of Ethernet technology, which together make up a whole family of technologies. The best known of these are the 10-megabit version of the IEEE 802.3 standard and the newer high-speed Fast Ethernet and Gigabit Ethernet technologies. All these variants and modifications differ in the type of physical transmission medium.

All Ethernet variants use the same method of access to the transmission medium: the CSMA/CD random access method. It applies to networks with a common logical bus, which operates in multiple-access mode and carries data between any two network nodes. This access method is probabilistic: whether a station gets the medium at its disposal depends on how loaded the network is. Under heavy load the collision rate grows and the useful throughput drops sharply.
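As an illustration only, here is a minimal sketch of the CSMA/CD logic with truncated binary exponential backoff. The `medium` object and its methods (`carrier_sensed`, `transmit`, `wait`) are hypothetical placeholders, not a real API:

```python
import random

SLOT_TIME_US = 51.2        # slot time of 10 Mbit/s Ethernet, microseconds
MAX_ATTEMPTS = 16          # after 16 collisions the frame is discarded

def csma_cd_send(medium, frame):
    """Very simplified CSMA/CD transmit loop; `medium` is a hypothetical object."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_sensed():           # 1. listen before transmitting
            pass                                 #    defer while the bus is busy
        if medium.transmit(frame):               # 2. assume False means a collision
            return True                          #    frame delivered to the bus
        # 3. collision detected: back off a random number of slot times
        k = min(attempt, 10)                     # truncated binary exponential backoff
        medium.wait(random.randint(0, 2 ** k - 1) * SLOT_TIME_US)
    return False                                 # excessive collisions, give up
```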

Useful network throughput is the rate of user data carried by the frame data field. It is always less than the nominal bit rate of the Ethernet protocol due to frame overhead, interframe intervals and waiting for access to the medium. When transmitting frames of the minimum length (72 bytes including preamble), the maximum possible throughput of an Ethernet segment is 14880 fps, and the useful throughput is only 5.48 Mbps, which is slightly more than half the nominal throughput - 10 Mbps. When transmitting frames of maximum length (1518 bytes), the useful throughput is 9.76 Mbit/s, which is close to the nominal speed of the protocol. Finally, when using medium-length frames with a data field of 512 bytes, the usable throughput is 9.29 Mbit/s, i.e., also not much different from the maximum throughput of 10 Mbit/s. It should be noted that such speeds are achieved only in the absence of collisions, when two interacting nodes are not interfered with by other nodes. The network utilization coefficient in the absence of collisions and access waiting has a maximum value of 0.96.
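These figures are easy to reproduce. Below is a small sketch using the classic 10 Mbit/s parameters (8 bytes of preamble, 18 bytes of address, length and FCS overhead, 96-bit interframe gap) that recomputes the useful throughput for the three data-field sizes mentioned; small rounding differences from the quoted numbers are expected.

```python
BIT_RATE = 10_000_000          # nominal Ethernet speed, bit/s
PREAMBLE = 8                   # preamble + start delimiter, bytes
OVERHEAD = 18                  # addresses, length and FCS fields, bytes
IFG_BITS = 96                  # 9.6 us interframe gap at 10 Mbit/s

def useful_throughput(data_bytes):
    """Frames per second and useful (payload) bit rate for a given data field size."""
    payload = max(data_bytes, 46)                  # short frames are padded to 46 bytes
    frame_bits = (PREAMBLE + OVERHEAD + payload) * 8
    frames_per_s = BIT_RATE / (frame_bits + IFG_BITS)
    return frames_per_s, frames_per_s * data_bytes * 8

for size in (46, 512, 1500):
    fps, bps = useful_throughput(size)
    print(f"data field {size:>4} bytes: {fps:7.0f} frames/s, {bps / 1e6:5.2f} Mbit/s")
# data field   46 bytes:   14881 frames/s,  5.48 Mbit/s
# data field  512 bytes:    2273 frames/s,  9.31 Mbit/s
# data field 1500 bytes:     813 frames/s,  9.75 Mbit/s
```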

Ethernet technology supports 4 different types of frames that have a common address format. Frame type recognition is carried out automatically. As an example, let's take the structure of the 802.3/LLC frame.

Such a frame has the following fields:

Preamble field - consists of seven synchronizing bytes of 10101010 each; with Manchester coding this produces a periodic waveform that lets the receiver synchronize with the sender;

Start frame delimiter - consists of a single byte 10101011 and indicates that the next byte is the first byte of the frame header;

Destination address - its length is 6 bytes; it contains flag bits that indicate the address type: individual (the frame is addressed to one PC), group (the frame is addressed to a group of PCs) or broadcast (to all PCs on the network);

Source (sender) address - its length is 2 or 6 bytes;

Data field length - a 2-byte field defining the length of the data field in the frame;

Data field - its length is from 0 to 1500 bytes. If the length of this field is less than 46 bytes, then a so-called padding field is used to pad the frame to the minimum allowable value of 46 bytes;

Fill field - its length is such as to ensure a minimum length of the data field of 46 bytes (this is necessary for the correct operation of the error detection mechanism). There is no padding field in the frame if the data field is long enough;

Checksum field - consists of 4 bytes and contains a checksum, which is used on the receiving side to detect errors in the received frame.
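To make the layout of the fields listed above concrete, here is a small parsing sketch. It assumes the preamble and start delimiter have already been stripped by the adapter; the function and key names are purely illustrative:

```python
import struct

def parse_8023_header(frame: bytes):
    """Split a raw IEEE 802.3 frame into its basic fields (simplified sketch)."""
    dst, src, length = struct.unpack("!6s6sH", frame[:14])
    data_and_pad = frame[14:-4]      # data field plus possible padding
    fcs = frame[-4:]                 # 4-byte checksum (CRC-32)
    return {
        "destination": dst.hex(":"),
        "source": src.hex(":"),
        "data_length": length,                 # as declared by the sender
        "data": data_and_pad[:length],         # padding (if any) discarded
        "checksum": fcs.hex(),
    }
```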

Depending on the type of physical medium, the IEEE 802.3 standard distinguishes the following specifications:

10Base-5 - thick coaxial cable (diameter 0.5 inches), maximum network segment length 500 meters;

10Base-2 - thin coaxial cable (diameter 0.25 inches), maximum segment length without repeaters 185 meters;

10Base-T - unshielded twisted pair forming a hub-based star topology; the distance between the hub and a PC is no more than 100 meters;

10Base-F - fiber optic cable forming a star topology; the distance between the hub and a PC is up to 1000 m or 2000 m, depending on the variant of this specification.

In these specifications, the number 10 denotes the bit rate (10 Mbit/s), the word Base denotes baseband transmission (the signal occupies the base frequency band, without modulation), and the last character (5, 2, T, F) indicates the segment length or the cable type.

All Ethernet standards have the following characteristics and limitations:

Nominal throughput - 10 Mbit/s;

The maximum number of PCs in the network is 1024;

The maximum distance between nodes in the network is 2500 m;

The maximum number of coaxial network segments is 5;

The maximum segment length is from 100 m (for 10Base-T) to 2000 m (for 10Base-F);

The maximum number of repeaters between any network stations is 4.

Token Ring technology (802.5 standard). This technology uses a shared data transmission medium consisting of cable segments that connect all the PCs of the network into a ring. Deterministic access is applied to the ring (a common shared resource), based on passing the right to use the ring to the stations in a definite order. This right is conveyed by means of a token. The token access method guarantees each PC access to the ring within the token rotation time. A priority system for token ownership is used, from 0 (lowest priority) to 7 (highest). The priority of the current frame is determined by the station itself, which may seize the ring if no frames of higher priority are circulating in it.

Token Ring networks use shielded and unshielded twisted pair and fiber optic cable as the physical transmission medium. The networks operate at two bit rates, 4 and 16 Mbit/s, and all PCs in one ring must operate at the same speed. The maximum length of the ring is 4 km, and the maximum number of PCs in the ring is 260. The limit on ring length is related to the time it takes the token to travel around the ring. If there are 260 stations in the ring and the token holding time of each station is 10 ms, the token will return to the active monitor after a full rotation in 2.6 s. When transmitting a long message divided, for example, into 100 frames, the message will reach the recipient in the best case (when only the sending PC is active) after 260 s, which is not always acceptable to users.
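The arithmetic behind these figures is straightforward; the sketch below simply restates it with the values from the paragraph above:

```python
STATIONS = 260               # maximum number of PCs in one ring
HOLD_TIME_S = 0.010          # token holding time per station, 10 ms

rotation_time = STATIONS * HOLD_TIME_S                     # full token rotation: 2.6 s
frames_in_message = 100
worst_case_delivery = frames_in_message * rotation_time    # one rotation per frame: 260 s
print(rotation_time, worst_case_delivery)                  # 2.6 260.0
```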

The maximum frame size in the 802.5 standard is not defined. It is usually taken to be 4 KB for 4 Mbit/s networks and 16 KB for 16 Mbit/s networks.

16 Mbit/s networks also use a more efficient ring access algorithm - early token release (ETR): a station passes the access token to the next station immediately after it finishes transmitting the last bit of its frame, without waiting for the frame and the busy token to come back around the ring. In this case frames from several stations travel along the ring at once, which noticeably improves utilization of the ring capacity. Of course, even then only the station that currently owns the access token can place a new frame onto the ring; the other stations merely relay frames that are not theirs.

Token Ring technology is significantly more complex than Ethernet technology. It contains fault-tolerance capabilities: due to ring feedback, one of the stations (active monitor) continuously monitors the presence of a token, the turnaround time of the token and data frames, detected errors in the network are eliminated automatically, for example, a lost token can be restored. If the active monitor fails, a new active monitor is selected and the ring initialization procedure is repeated.

The Token Ring standard (the technology was developed back in 1984 by IBM, which remains the trendsetter in this technology) initially provided for building network connections using hubs called MAUs, i.e. multistation access units. A hub can be passive (it interconnects the ports so that the PCs attached to them form a ring, and bypasses the port of a computer that is switched off) or active (it regenerates the signals and is therefore sometimes called a repeater).

Token Ring networks are characterized by a star-ring topology: PCs are connected to hubs using a star topology, and the hubs themselves are combined through special Ring In (RI) and Ring Out (RO) ports to form a backbone physical ring. The Token Ring network can be built on the basis of several rings, separated by bridges that route frames to the recipient (each frame is equipped with a field with the route of the rings).

Recently, through the efforts of IBM, Token Ring technology has received a new development: a new version of this technology (HSTR) has been proposed, supporting bit rates of 100 and 155 Mbit/s. At the same time, the main features of the 16 Mbit/s Token Ring technology are preserved.

FDDI technology. This is the first LCS technology that uses fiber optic cable for data transmission. It appeared in 1988 and its official name is Fiber Distributed Data Interface (FDDI). Currently, in addition to fiber optic cable, unshielded twisted pair cable is used as a physical medium.

FDDI technology is intended for use on backbone connections between networks, for connecting high-performance servers to a network, in corporate and metropolitan networks. Therefore, it provides high data transfer speed (100 Mbit/s), fault tolerance at the protocol level and long distances between network nodes. All this affected the cost of connecting to the network: this technology turned out to be too expensive for connecting client computers.

There is significant continuity between Token Ring and FDDI technologies. The basic ideas of the Token Ring technology were adopted and improved and developed in the FDDI technology, in particular, the ring topology and the token access method.


In the FDDI network, two fiber optic rings are used for data transmission, forming the primary and backup transmission paths between PCs. Network stations are attached to both rings. In normal mode only the primary ring is active. If part of the primary ring fails, it is merged with the backup ring by means of the hubs and network adapters, again forming a single ring (the rings are said to "wrap"). This wrap procedure in case of failure is the main way of increasing the network's fault tolerance. There are also other procedures for detecting network failures and restoring operability.

The main difference between the token method of access to the transmission medium used in the FDDI network and this method in the Token Ring network is that in the FDDI network, the token holding time is a constant value only for synchronous traffic, which is critical to frame transmission delays. For asynchronous traffic, which is not critical to small delays in frame transmission, this time depends on the ring load: with a small load it increases, and with a large load it can decrease to zero. Thus, for asynchronous traffic, the access method is adaptive, well regulating temporary network congestion. There is no frame priority mechanism. It is believed that it is enough to divide the traffic into two classes - synchronous, which is always serviced (even when the ring is overloaded), and asynchronous, serviced when the ring load is low. FDDI stations use an early token release algorithm, as is done in the 16 Mbps Token Ring network. Signal synchronization is ensured by using the NRZI bipolar code.

In an FDDI network, there is no dedicated active monitor, all stations and hubs are equal, and if abnormalities are detected, they reinitialize the network and, if necessary, reconfigure it.

The results of comparing FDDI technology with Ethernet and Token Ring technologies are given in Table 8.


Fast Ethernet and 100VG-AnyLAN technologies. Neither of these is an independent development; both are regarded as extensions of and additions to Ethernet technology and appeared in the mid-1990s. The new Fast Ethernet (IEEE 802.3u) and 100VG-AnyLAN (IEEE 802.12) technologies provide a throughput of 100 Mbit/s and differ in their degree of continuity with classic Ethernet.

The 802.3u standard retains the CSMA/CD random access method and thereby ensures continuity and consistency between 10 Mbit/s and 100 Mbit/s networks.

100VG-AnyLAN technology uses a completely new access method - Demand Priority (DP), priority access on demand. This technology is significantly different from Ethernet technology.

Let us note the features of Fast Ethernet technology and its differences from Ethernet technology:

The structure of the physical layer of Fast Ethernet technology is more complex, which is explained by the use of three types of cabling systems: fiber optic cable, twisted pair category 5 (two pairs are used), twisted pair category 3 (four pairs are used). The abandonment of coaxial cable has led to the fact that networks of this technology always have a hierarchical tree structure;

The network diameter is reduced to 200 m, the transmission time of a frame of minimum length is reduced by 10 times due to an increase in transmission speed by 10 times;

Fast Ethernet technology can be used to create long-distance local network backbones, but only in the half-duplex version and in conjunction with switches (the half-duplex mode of operation for this technology is the main one);

For all three physical-layer specifications, which differ in the type of cable used, the frame formats do not differ from those of 10-Mbit Ethernet;

A sign of a free state of the transmitting medium is not the absence of signals, but the transmission of a special symbol in encoded form through it;

Manchester coding is not used to represent data on the cable and provide signal synchronization. Instead, the 4B/5B encoding method, well proven in FDDI technology, is used. Under this method every 4 bits of transmitted data are represented by 5 bits: of the 32 possible 5-bit combinations, only 16 are used to encode the original 4-bit values, and several of the remaining 16 combinations serve as service codes (a small encoding sketch is given after this list). One of the service codes is transmitted continuously during pauses between frames; its absence from the line indicates a failure of the physical connection;

Signals are encoded and synchronized using the NRZI bipolar code;

Fast Ethernet technology is designed to use repeater hubs to form connections in the network (the same is true for all non-coaxial Ethernet options).
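A sketch of the 4B/5B idea mentioned in the list above. The table holds the sixteen standard data code groups (as used in FDDI and 100BASE-TX); encoding a byte simply maps each of its two nibbles to a 5-bit group. Service symbols (idle, delimiters) are omitted, and the MSB-first nibble order is an assumption of this sketch:

```python
# Standard 4B/5B data code groups, indexed by the 4-bit value they encode.
CODE_4B5B = [
    0b11110, 0b01001, 0b10100, 0b10101, 0b01010, 0b01011, 0b01110, 0b01111,
    0b10010, 0b10011, 0b10110, 0b10111, 0b11010, 0b11011, 0b11100, 0b11101,
]

def encode_4b5b(data: bytes) -> list[int]:
    """Return the stream of 5-bit symbols for a byte string (high nibble first)."""
    symbols = []
    for byte in data:
        symbols.append(CODE_4B5B[byte >> 4])      # upper 4 bits -> 5-bit group
        symbols.append(CODE_4B5B[byte & 0x0F])    # lower 4 bits -> 5-bit group
    return symbols

print([f"{s:05b}" for s in encode_4b5b(b"\x5A")])  # ['01011', '10110']
```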

Features of 100VG-AnyLAN technology are as follows:

A different method of access to the transmission medium is used - Demand Priority, which distributes network bandwidth among user requests more efficiently and supports priority access for synchronous operation. The hub acts as the access arbiter, cyclically polling the workstations. A station that wants to transmit a frame sends the hub a special signal requesting frame transmission and indicating the frame's priority. There are two priority levels: low (for ordinary data) and high (for time-sensitive data such as multimedia). Request priorities have a static and a dynamic component, so a low-priority station that has not had access to the network for a long time eventually receives high priority;

Frames are transmitted only to the destination station, and not to all stations on the network;

Ethernet and Token Ring frame formats are preserved, which facilitates internetworking through bridges and routers;

Several physical-layer specifications are supported: four or two unshielded twisted pairs, two shielded twisted pairs, or two optical fibers. When 4 pairs of unshielded cable are used, each pair carries data at 25 Mbit/s simultaneously, giving 100 Mbit/s in total. There are no collisions during transmission. Data are encoded with a 5B/6B code, conceptually similar to 4B/5B.

100VG-AnyLAN technology is not as widespread as Fast Ethernet. This is explained by the narrow technical capabilities of supporting different types of traffic, as well as the emergence of high-speed Gigabit Ethernet technology.

Gigabit Ethernet technology. The emergence of this technology represents a new step in the hierarchy of Ethernet family networks, providing a transmission speed of 1000 Mbit/s. The standard for this technology was adopted in 1998; it preserves the ideas of classical Ethernet technology as much as possible.

Regarding Gigabit Ethernet technology, the following should be noted:

The following are not supported at the protocol level (just like its predecessors): quality of service, redundant connections, testing the performance of nodes and equipment. As for the quality of service, it is believed that the high speed of data transmission along the backbone and the ability to assign priorities to packets in switches are quite sufficient to ensure the quality of transport service for network users. Support for redundant connections and testing of equipment are carried out by higher-level protocols;

All Ethernet frame formats are preserved;

It is possible to operate in half-duplex and full-duplex modes. The first of them supports the CSMA/CD access method, and the second supports work with switches;

All major types of cables are supported, as in previous technologies of this family: fiber optic, twisted pair, coaxial;

The minimum frame size has been increased from 64 to 512 bytes, the maximum network diameter is the same - 200 m. You can transmit several frames in a row without releasing the medium.

Gigabit Ethernet technology allows you to build large local networks in which servers and backbones at lower levels of the network operate at a speed of 100 Mbit/s, and a 1000 Mbit/s backbone connects them, providing a reserve of bandwidth.

So far, we have considered protocols that operate at the first three levels of the seven-layer OSI reference model and implement the corresponding methods for logical data transfer and access to the transmission medium. These protocols transfer packets between workstations, but do not address issues related to network file systems and file forwarding. These protocols do not include any means of ensuring the correct sequence of receiving transmitted data and no means of identifying application programs that need to exchange data.

Unlike lower-level protocols, upper-level protocols (also called middle-level protocols, since they are implemented at layers 4 and 5 of the OSI model) are used for data exchange. They provide programs with an interface for data transmission using the datagram method, when packets are addressed and transmitted without confirmation of receipt, and the communication session method, when a logical connection is established between interacting stations (source and destination) and message delivery is confirmed.

Upper-level protocols are discussed in detail in the next chapter. Here we will only briefly note the IPX/SPX protocol stack, which has become widely used in local networks, especially as their topology has grown more complex (routing is no longer trivial) and the range of services provided has expanded. IPX/SPX is the NetWare network protocol suite: IPX (Internetwork Packet Exchange) is the internetwork packet exchange protocol, and SPX (Sequenced Packet Exchange) is the sequenced packet exchange protocol.

IPX/SPX protocol. This is a suite consisting of the IPX and SPX protocols. Novell's NetWare network operating system uses the IPX protocol for datagram exchange and the SPX protocol for exchange within communication sessions.

The IPX/SPX protocol is implemented in software; it does not work with hardware interrupts directly, relying instead on operating system driver functions. The IPX/SPX protocol pair has a fixed header length, which ensures full compatibility between different implementations of these protocols.

The IPX protocol is used by routers in the NetWare network operating system (NOS). It corresponds to the network layer of the OSI model and performs addressing, routing and forwarding functions when transmitting data packets. Although message delivery is not guaranteed (the addressee does not send the sender a confirmation of receipt), in 95% of cases no retransmission is required. Service requests to file servers are made at the IPX level, and each such request requires a response from the server. This underpins the reliability of the datagram method, since routers treat the server's response to a request as confirmation that the packet was transmitted correctly.


  • INTRODUCTION

    1 ETHERNET AND FAST ETHERNET NETWORKS

    2 TOKEN-RING NETWORK

    3 ARCNET NETWORK

    4 FDDI NETWORK

    5 100VG-AnyLAN NETWORK

    6 ULTRA-SPEED NETWORKS

    7 WIRELESS NETWORKS

    CONCLUSION

    LIST OF SOURCES USED


    INTRODUCTION

    Since the advent of the first local networks, several hundred different network technologies have been developed, but only a few have become noticeably widespread. This is due, first of all, to the high level of standardization of networking principles and their support by well-known companies. However, standard networks do not always have record-breaking characteristics and provide the most optimal exchange modes. But the large production volumes of their equipment and, consequently, its low cost give them enormous advantages. It is also important that software manufacturers also primarily focus on the most common networks. Therefore, a user who chooses standard networks has a full guarantee of compatibility of equipment and programs.

    The purpose of this course work is to consider existing local network technologies, their characteristics and advantages or disadvantages over each other.

    I chose the topic of local network technologies because, in my opinion, it is especially relevant now, when mobility, speed and convenience are valued all over the world and people want to waste as little time as possible.

    Currently, reducing the number of types of networks used has become a trend. The fact is that increasing the transmission speed in local networks to 100 and even 1000 Mbit/s requires the use of the most advanced technologies and expensive scientific research. Naturally, only the largest companies that support their standard networks and their more advanced varieties can afford this. In addition, a large number of consumers have already installed some kind of network and do not want to immediately and completely replace network equipment. It is unlikely that fundamentally new standards will be adopted in the near future.

    The market offers standard local networks of all possible topologies, so users have a choice. Standard networks provide a wide range of acceptable network sizes, number of subscribers and, last but not least, equipment prices. But making a choice is still not easy. Indeed, unlike software, which is not difficult to replace, hardware usually lasts for many years; its replacement leads not only to significant costs and the need to re-wire cables, but also to a revision of the organization's computer system. In this regard, errors in the choice of equipment are usually much more expensive than errors in the choice of software.

    1 ETHERNET AND FAST ETHERNET NETWORKS

    The most widespread among standard networks is Ethernet. It first appeared in 1972 (developed by the well-known company Xerox). The network proved quite successful, and as a result in 1980 it was backed by such major companies as DEC and Intel. Through their efforts, in 1985 Ethernet became an international standard; it was adopted by the largest international standards organizations: IEEE Committee 802 (Institute of Electrical and Electronics Engineers) and ECMA (European Computer Manufacturers Association).

    The standard is called IEEE 802.3 (read aloud as "eight oh two dot three"). It defines multiple access to a bus-type monochannel with collision detection and transmission control. Some other networks also met this standard, since its level of detail is low; as a result, IEEE 802.3 networks were often incompatible with one another in both design and electrical characteristics. Recently, however, the IEEE 802.3 standard and the Ethernet network have come to be regarded as one and the same.

    Main characteristics of the original IEEE 802.3 standard:

    • topology – bus;
    • transmission medium – coaxial cable;
    • transmission speed – 10 Mbit/s;
    • maximum network length – 5 km;
    • maximum number of subscribers – up to 1024;
    • network segment length – up to 500 m;
    • number of subscribers on one segment – up to 100;
    • access method – CSMA/CD;
    • baseband transmission, that is, without modulation (monochannel).

    Strictly speaking, there are minor differences between the IEEE 802.3 and Ethernet standards, but they are usually ignored.

    The Ethernet network is now the most popular in the world (more than 90% of the market), and presumably it will remain so in the coming years. This was greatly facilitated by the fact that from the very beginning the characteristics, parameters, and protocols of the network were open, as a result of which a huge number of manufacturers around the world began to produce Ethernet equipment that was fully compatible with each other.

    The classic Ethernet network used 50-ohm coaxial cable of two types (thick and thin). However, recently (since the early 90s), the most widely used version of Ethernet is that using twisted pairs as a transmission medium. A standard has also been defined for use in fiber optic cable networks. Additions have been made to the original IEEE 802.3 standard to accommodate these changes. In 1995, an additional standard appeared for a faster version of Ethernet operating at a speed of 100 Mbit/s (the so-called Fast Ethernet, IEEE 802.3u standard), using twisted pair or fiber optic cable as the transmission medium. In 1997, a version with a speed of 1000 Mbit/s (Gigabit Ethernet, IEEE 802.3z standard) also appeared.

    In addition to the standard bus topology, passive star and passive tree topologies are increasingly being used.


    Classic Ethernet network topology

    The maximum cable length of the network as a whole (maximum signal path) can theoretically reach 6.5 kilometers, but practically does not exceed 3.5 kilometers.

    A Fast Ethernet network does not have a physical bus topology; only a passive star or passive tree is used. In addition, Fast Ethernet imposes much stricter requirements on the maximum network length. After all, with a 10-fold increase in transmission speed and the same packet format, a packet's minimum transmission time becomes ten times shorter; accordingly, the permissible double (round-trip) signal propagation time through the network is reduced by a factor of 10 (5.12 μs versus 51.2 μs in Ethernet).
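    The 51.2 μs and 5.12 μs figures follow directly from the 512-bit (64-byte) minimum frame; a one-line check:

```python
MIN_FRAME_BITS = 512                      # 64-byte minimum frame

for name, rate in (("Ethernet", 10e6), ("Fast Ethernet", 100e6)):
    slot = MIN_FRAME_BITS / rate          # maximum allowed round-trip delay
    print(f"{name}: {slot * 1e6:.2f} us") # Ethernet: 51.20 us / Fast Ethernet: 5.12 us
```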

    The standard Manchester code is used to transmit information on an Ethernet network.

    Access to the Ethernet network is carried out using the random CSMA/CD method, ensuring equality among the subscribers. The network uses variable-length packets with a defined structure.

    For an Ethernet network operating at a speed of 10 Mbit/s, the standard defines four main types of network segments, focused on different information transmission media:

    • 10BASE5 (thick coaxial cable);
    • 10BASE2 (thin coaxial cable);
    • 10BASE-T (twisted pair);
    • 10BASE-FL (fiber optic cable).

    The name of the segment includes three elements: the number “10” means a transmission speed of 10 Mbit/s, the word BASE means transmission in the base frequency band (that is, without modulating a high-frequency signal), and the last element is the permissible length of the segment: “5” – 500 meters, “2” – 200 meters (more precisely, 185 meters) or type of communication line: “T” – twisted pair (from the English “twisted-pair”), “F” – fiber optic cable (from the English “fiber optic”).

    Similarly, for an Ethernet network operating at a speed of 100 Mbit/s (Fast Ethernet), the standard defines three types of segments, differing in the types of transmission media:

    • 100BASE-T4 (quad twisted pair);
    • 100BASE-TX (dual twisted pair);
    • 100BASE-FX (fiber optic cable).

    Here, the number “100” means a transmission speed of 100 Mbit/s, the letter “T” means twisted pair, and the letter “F” means fiber optic cable. The types 100BASE-TX and 100BASE-FX are sometimes combined under the name 100BASE-X, and 100BASE-T4 and 100BASE-TX are called 100BASE-T.

    The development of Ethernet technology is moving further and further away from the original standard. The use of new transmission media and switches makes it possible to significantly increase the size of the network. Elimination of the Manchester code (in Fast Ethernet and Gigabit Ethernet networks) provides increased data transfer speeds and reduced cable requirements. Refusal of the CSMA/CD control method (with full-duplex exchange mode) makes it possible to dramatically increase operating efficiency and remove restrictions on network length. However, all new varieties of network are also called Ethernet network.

    2 TOKEN-RING NETWORK

    The Token-Ring network was proposed by IBM in 1985 (the first version appeared in 1980). It was intended to network all types of computers produced by IBM. The very fact that it is supported by IBM, the largest manufacturer of computer equipment, suggests that it needs to be given special attention. But equally important is that Token-Ring is currently the international standard IEEE 802.5 (although there are minor differences between Token-Ring and IEEE 802.5). This puts this network on the same level of status as Ethernet.

    Token-Ring was developed as a reliable alternative to Ethernet. And although Ethernet is now replacing all other networks, Token-Ring cannot be considered hopelessly outdated. More than 10 million computers around the world are connected by this network.

    IBM has done everything to ensure the widest possible distribution of its network: detailed documentation was released, right down to the circuit diagrams of the adapters. As a result, many companies, for example, 3COM, Novell, Western Digital, Proteon and others began producing adapters. By the way, the NetBIOS concept was developed specifically for this network, as well as for another network, the IBM PC Network. If in the previously created PC Network NetBIOS programs were stored in the built-in read-only memory of the adapter, then in the Token-Ring network a program emulating NetBIOS was already used. This made it possible to respond more flexibly to hardware features and maintain compatibility with higher-level programs.

    The rapid development of local networks, embodied today in the 10 Gigabit Ethernet standard and the IEEE 802.11b/a wireless technologies, is attracting more and more attention. Ethernet has become the de facto standard for cable networks. And although Ethernet in its classical form has not been seen for a long time, the ideas originally laid down in the IEEE 802.3 protocol found their logical continuation in Fast Ethernet and Gigabit Ethernet. For the sake of historical justice, let us note that technologies such as Token Ring, ARCNET, 100VG-AnyLAN, FDDI and AppleTalk also deserve attention. So let us restore historical justice and recall the technologies of bygone days.

    I think there is no need to talk about the rapid progress in the semiconductor industry observed in the last decade. Network equipment suffered the same fate as the entire industry: an avalanche-like growth in production, high speeds and minimal prices. In 1995, which is considered a turning point in the history of the Internet, about 50 million new Ethernet ports were sold. A good start for market dominance, which became overwhelming over the next five years.

    This price level is not available for specialized telecommunications equipment. The complexity of the device does not play a special role in this case - it is rather a question of quantity. Now this seems quite natural, but ten years ago the unconditional dominance of Ethernet was far from obvious (for example, in industrial networks there is still no clear leader).

    However, only in comparison with other methods of building networks can one identify the advantages (or disadvantages) of today's leader.

    Basic methods of access to the transmission medium

    The physical principles by which the equipment operates are not overly complex. By the way they gain access to the transmission medium, access methods can be divided into two classes: deterministic and non-deterministic.

    With deterministic access methods, the transmission medium is distributed between nodes using a special control mechanism that guarantees the transmission of node data within a certain period of time.

    The most common (but far from the only) deterministic access methods are the polling method and the transfer of rights method. The polling method is of little use in local networks, but is widely used in industry to control technological processes.

    The transfer of rights method, on the contrary, is convenient for transferring data between computers. The principle of operation is to transmit a service message - a token - over a network with a ring logical topology.

    Receiving a token grants a device the right to access the shared resource. The workstation's choice is limited to two options: in either case it must pass the token to the next device in line, either after delivering its data to the recipient (if it has any) or immediately (if it has nothing to transmit). While data is in transit the token is absent from the network, the other stations cannot transmit, and collisions are impossible in principle. To handle possible errors that could cause the token to be lost, there is a mechanism for regenerating it.
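    A toy model of this right-passing, just to make the idea concrete; the station names and frames are invented:

```python
from collections import deque

def token_ring_round(stations):
    """One full pass of the token around a logical ring (toy model).

    `stations` is a list of (name, frame_or_None); a station that has data
    transmits it while holding the token, otherwise it simply forwards the token.
    """
    ring = deque(stations)
    log = []
    for _ in range(len(ring)):
        name, frame = ring[0]
        if frame is not None:
            log.append(f"{name} holds the token and sends: {frame}")
        else:
            log.append(f"{name} has nothing to send, passes the token on")
        ring.rotate(-1)                  # the token moves to the next station
    return log

for line in token_ring_round([("A", "hello"), ("B", None), ("C", "data")]):
    print(line)
```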

    Random access methods are called non-deterministic. They provide for competition between all network nodes for the right to transmit. Simultaneous transmission attempts by several nodes are possible, resulting in collisions.

    The most common method of this type is CSMA/CD (carrier-sense multiple access/collision detection). Before transmitting data, the device “listens” to the network to ensure that no one else is using it. If the transmission medium is being used by someone at this moment, the adapter delays the transmission, but if not, it begins to transmit data.

    In the case when two adapters, having detected a free line, start transmitting simultaneously, a collision occurs. When it is detected, both transmissions are interrupted and the devices repeat the transmission after some arbitrary time (of course, after first “listening” to the channel again to see if it is busy). To receive information, a device must receive all packets on the network to determine whether it is the destination.

    From the history of Ethernet

    If we started looking at LANs with any other technology, we would be missing the real importance that Ethernet currently has in this area. Whether due to the prevailing circumstances or due to technical advantages, it has no competition today, occupying about 95% of the market.

    Ethernet's birthday is May 22, 1973. On that day Robert Metcalfe and David Boggs published a description of the experimental network they had built at the Xerox research center. It was based on thick coaxial cable and provided a data transfer rate of 2.94 Mbit/s. The new technology was named Ethernet (from "ether"), in honor of the ALOHA radio network of the University of Hawaii, which used a similar mechanism for sharing the transmission medium (the radio airwaves).

    By the end of the 70s, Ethernet had a solid theoretical basis. And in February 1980, Xerox, together with DEC and Intel, submitted the development to the IEEE, which approved it three years later as the 802.3 standard.

    Ethernet's non-deterministic method for gaining access to the data transmission medium is carrier sense multiple access with collision detection (CSMA/CD). Simply put, devices share the transmission medium chaotically, randomly. In this case, the algorithm can lead to far from equal resolution of competition between stations for access to the medium. This, in turn, can cause long access delays, especially under congested conditions. In extreme cases, the transmission speed can drop to zero.

    Because of this disorganized approach, it was long believed (and still is) that Ethernet does not provide high-quality data transmission. It was predicted that it would be replaced first by Token Ring, then by ATM, but in reality everything happened the other way around.

    The fact that Ethernet still dominates the market is due to the great changes it has undergone over its 20-plus years of existence. The full-duplex "gigabit" we now see even in entry-level networks bears little resemblance to the founder of the family, 10Base5. At the same time, ever since the introduction of 10Base-T, compatibility has been maintained both at the level of device interaction and at the level of the cable infrastructure.

    Development from simple to complex, growth along with user needs - this is the key to the incredible success of the technology. Judge for yourself:

    • March 1981 - 3Com introduces an Ethernet transceiver;
    • September 1982 - the first network adapter for a personal computer was created;
    • 1983 - the IEEE 802.3 specification appeared, defining the bus topologies 10Base5 ("thick" Ethernet) and 10Base2 ("thin" Ethernet). Transfer speed - 10 Mbit/s. The maximum extent of the network was set at 2.5 km;
    • 1985 - the second version of the specification, IEEE 802.3 (Ethernet II), was released, with minor changes to the packet header structure. A strict scheme for identifying Ethernet devices (MAC addresses) was defined, and an address registry was created in which any manufacturer can register a unique range (currently this costs only $1,250);
    • September 1990 - IEEE approves 10Base-T (twisted pair) technology with a physical star topology and hubs. The CSMA/CD logical topology has not changed. The standard is based on developments by SynOptics Communications under the general name LattisNet;
    • 1990 - Kalpana (later acquired, together with its CPW16 switch, by the future giant Cisco) proposes switching technology based on abandoning the sharing of communication lines among all nodes of a segment;
    • 1992 - switches come into use. Using the address information contained in the packet (the MAC address), a switch organizes independent virtual channels between pairs of nodes. Switching effectively turns the non-deterministic Ethernet model (with contention for bandwidth) into an address-based data delivery system, transparently to the user;
    • 1993 - the IEEE 802.3x specification brings full duplex and flow control for 10Base-T, and the IEEE 802.1p specification adds multicast addressing and an 8-level priority system; Fast Ethernet is proposed;
    • Fast Ethernet, IEEE 802.3u (100Base-T) standard, was introduced in June 1995.

    This is where the short story can end: Ethernet has taken on quite modern shapes, but the development of technology, of course, has not stopped - we will talk about this a little later.

    Undeservedly forgotten ARCNET

    Attached Resource Computer Network (ARCNET) is a network architecture developed by Datapoint in the mid-70s. ARCNET has not been adopted as an IEEE standard, but it partially complies with IEEE 802.4 as a token-passing (logical ring) network. A data packet can be of any size from 1 to 507 bytes.

    Of all local networks, ARCNET has the most extensive topology capabilities. Ring, common bus, star, tree can be used in the same network. In addition to this, very long segments (up to several kilometers) can be used. The same wide possibilities apply to the transmission medium - both coaxial and fiber optic cables, as well as twisted pair, are suitable.

    This inexpensive standard was prevented from dominating the market by its low speed - only 2.5 Mbit/s. When Datapoint developed ARCNET PLUS, with transfer speeds of up to 20 Mbit/s, in the early 1990s, the moment had already passed: Fast Ethernet left ARCNET no chance of widespread use.

    Nevertheless, in favor of the great (but never realized) potential of this technology, we can say that in some industries (usually process control systems) these networks still exist. Deterministic access, auto-configuration capabilities, and negotiation of exchange rates in the range from 120 Kbit/s to 10 Mbit/s in difficult real production conditions make ARCNET simply irreplaceable.

    In addition, ARCNET provides a capability essential for control systems: the maximum access time to any device on the network under any load can be determined exactly by a simple formula, T = (Tdp + Tob·Nb)·Nd, where Tdp and Tob are the transmission times of a data packet and of one byte respectively (depending on the selected transmission speed), Nb is the number of data bytes, and Nd is the number of devices on the network.
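    Using the formula as reconstructed above, the worst-case access time is easy to compute; the numeric values below are invented purely for illustration:

```python
def arcnet_max_access_time(t_dp, t_ob, n_bytes, n_devices):
    """Worst-case access time T = (Tdp + Tob * Nb) * Nd (formula as reconstructed)."""
    return (t_dp + t_ob * n_bytes) * n_devices

t_ob = 8 / 2_500_000        # time to send one byte at 2.5 Mbit/s, seconds
t_dp = 0.001                # assumed fixed per-packet overhead, seconds
print(arcnet_max_access_time(t_dp, t_ob, 508, 20))   # about 0.05 s for 20 devices
```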

    Token Ring is a classic example of token passing

    Token Ring is another technology whose roots go back to the 70s. This development by Big Blue (IBM), which underlies the IEEE 802.5 standard, had a greater chance of success than many other local networks. Token Ring is a classic token-passing network. Its logical topology (and, in the first versions of the network, its physical topology as well) is a ring. More modern modifications are built on twisted pair in a star topology and, with some reservations, are compatible with Ethernet.

    The original transmission speed described in IEEE 802.5 was 4 Mbit/s, but a more recent implementation of 16 Mbit/s exists. Because of its more streamlined (deterministic) method of accessing the medium, Token Ring was often promoted in its early stages as a superior replacement for Ethernet.

    Despite the existence of a priority access scheme (which was assigned to each station individually), it was not possible to provide a constant bit rate (Constant Bit Rate, CBR) for a very simple reason: applications that could take advantage of these schemes did not exist then. And nowadays there are not much more of them.

    Given this circumstance, it was only possible to guarantee that the performance for all stations in the network would decrease equally. But this was not enough to win the competition, and now it is almost impossible to find a really working Token Ring network.

    FDDI - the first local network on fiber optics

    Fiber Distributed Data Interface (FDDI) technology was developed in 1980 by an ANSI committee. It was the first computer network to use only fiber optic cable as a transmission medium. The reasons that prompted manufacturers to create FDDI were the insufficient speed (no more than 10 Mbit/s) and reliability (lack of redundancy schemes) of local networks at that time. In addition, this was the first (and not very successful) attempt to bring data networks to the “transport” level, competing with SDH.

    The FDDI standard stipulates data transmission over a dual ring of fiber optic cable at a speed of 100 Mbit/s, which provides a reliable (redundant) and fast channel. The distances are quite significant - up to 100 km around the perimeter. Logically, the network's operation is based on token passing.

    Additionally, a developed traffic prioritization scheme was provided. At first, workstations were divided into two types: synchronous (having a constant bandwidth) and asynchronous. The latter, in turn, distributed the transmission medium using an eight-level priority system.

    Incompatibility with SDH networks did not allow FDDI to occupy any significant niche in the field of transport networks. Today this technology has practically been replaced by ATM. And the high cost left FDDI no chance in the fight with Ethernet for the local niche. Attempts to switch to cheaper copper cable did not help the standard either. CDDI technology, based on the principles of FDDI, but using twisted pair cables as a transmission medium, was not popular and was preserved only in textbooks.

    Developed by AT&T and HP - 100VG-AnyLAN

    This technology, like FDDI, can be classified as a second-generation local network. It was created in the early 90s by the joint efforts of AT&T and HP as an alternative to Fast Ethernet. In the summer of 1995, almost simultaneously with its competitor, it received the status of the IEEE 802.12 standard. 100VG-AnyLAN had a good chance of winning thanks to its versatility, determinism, and better compatibility than Ethernet with existing cable networks (category 3 twisted pair).

    The Quartet Coding scheme, using the redundant 5B/6B code, made it possible to use 4-pair category 3 twisted pair, which at the time was almost more common than the now-standard category 5. The transition period, in fact, did not affect Russia, where, owing to the later start of construction of communication systems, category 5 cabling was laid everywhere.

    In addition to using legacy wiring, each 100VG-AnyLAN hub can be configured to support 802.3 (Ethernet) frames or 802.5 (Token Ring) frames. The Demand Priority media access method defines a simple two-level priority system - high for multimedia applications and low for everything else.

    I must say, this was a serious bid for success. It was let down by the high cost, caused by the greater complexity and, to a large extent, by the technology being closed to replication by third-party manufacturers. Added to this was the lack of real applications, already familiar from Token Ring, that could take advantage of the priority system. As a result, 100Base-T managed to seize leadership in the industry permanently and definitively.

    The innovative technical ideas found application a little later, first in 100Base-T2 (IEEE 802.3y), and then in “gigabit” Ethernet 1000Base-T.

    Apple Talk, Local Talk

    Apple Talk is a protocol stack proposed by Apple in the early 80s. Initially, Apple Talk protocols were used to work with network equipment, collectively called Local Talk (adapters built into Apple computers).

    The network topology was built as a common bus or “tree”, its maximum length was 300 m, the transmission speed was 230.4 Kbps. The transmission medium is shielded twisted pair. The Local Talk segment could connect up to 32 nodes.

    Low bandwidth quickly necessitated the development of adapters for higher bandwidth network environments: Ether Talk, Token Talk, and FDDI Talk for Ethernet, Token Ring, and FDDI networks, respectively. Thus, Apple Talk has gone the way of universality at the link level and can adapt to any physical implementation of the network.

    Like most other Apple products, these networks live within the “Apple” world and have virtually no overlap with PCs.

    UltraNet - network for supercomputers

    Another virtually unknown type of network in Russia is UltraNet. It was actively used to work with supercomputer-class computing systems and mainframes, but is currently being actively replaced by Gigabit Ethernet.

    UltraNet uses a star topology and is capable of providing information exchange speeds between devices up to 1 Gbit/s. This network is characterized by a very complex physical implementation and very high prices, comparable to supercomputers. To control UltraNet, PC computers are used, which are connected to a central hub. Additionally, the network may include bridges and routers for connecting to networks built using Ethernet or Token Ring technologies.

    Coaxial cable and optical fiber can be used as transmission media (for distances up to 30 km).

    Industrial and specialized networks

    It should be noted that data networks are used not only for communication between computers or for telephony. There is also a fairly large niche of industrial and specialized devices. For example, CANBUS technology is quite popular, created to replace thick and expensive wiring harnesses in cars with one common bus. This network does not have a large selection of physical connections, the segment length is limited, and the transmission speed is low (up to 1 Mbit/s). However, CANBUS offers the combination of quality and low implementation cost needed for small and medium-scale automation. Similar systems also include ModBus, PROFIBUS, FieldBus.

    Today, the interests of CAN controller developers are gradually shifting towards home automation.

    ATM as a universal data transmission technology

    It is not for nothing that the description of the ATM standard is placed at the end of the article. This is perhaps one of the last, but unsuccessful attempts to give battle to Ethernet on its field. These technologies are the complete opposite of each other in terms of the history of creation, the course of implementation and ideology. If Ethernet rose “from the bottom up, from the specific to the general”, increasing speed and quality, following the needs of users, then ATM developed completely differently.

    In the mid-1980s, the American National Standards Institute (ANSI) and the International Consultative Committee on Telephony and Telegraphy (CCITT) began developing the ATM (Asynchronous Transfer Mode) standards as a set of recommendations for the B-ISDN (Broadband Integrated Services Digital Network). Only in 1991 did these efforts of academic science culminate in the creation of the ATM Forum, which still determines the development of the technology. The first major project built with this technology, in 1994, was the backbone of the famous NSFNET network, which previously used a T3 channel.

    The essence of ATM is very simple: you need to mix all types of traffic (voice, video, data), compress it and transmit it over one communication channel. As noted above, this is achieved not through any technical breakthroughs, but rather through numerous compromises. In some ways this is similar to the way of solving differential equations. Continuous data is divided into intervals that are small enough to perform switching operations.

    Naturally, this approach greatly complicated the already difficult task of developers and manufacturers of real equipment and delayed the implementation timeframe unacceptably for the market.

    The size of the minimum portion of data (cells - in ATM terminology) is influenced by several factors. On the one hand, increasing the size reduces the speed requirements of the cell processor-switch and increases the efficiency of channel utilization. On the other hand, the smaller the cell, the faster transmission is possible.

    Indeed, while one cell is being transmitted, the second (even the highest priority one) is waiting. Sophisticated mathematics and the mechanisms of queues and priorities can slightly smooth out the effect, but not eliminate the cause. After quite a lot of experimentation, in 1989 the cell size was set at 53 bytes (5 bytes of service information and 48 bytes of data). Obviously, this size could be different for different speeds. If for speeds from 25 to 155 Mbit/s a size of 53 bytes is suitable, then for a gigabit 500 bytes would be no worse, and for 10 gigabits 5000 bytes would also do. But in that case the compatibility problem becomes insoluble. The reasoning is by no means academic in nature - it was the limitation on switching speed that set the technical limit for increasing ATM speeds beyond 622 Mbit/s and sharply increased the cost at lower speeds.
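
    To make the trade-off tangible, here is a rough back-of-the-envelope calculation (a Python sketch; the set of speeds is chosen only for illustration) of how long a single 53-byte cell occupies the link, and therefore how long a waiting higher-priority cell may be delayed:

        # Rough illustration: serialization delay of one 53-byte ATM cell.
        # A waiting high-priority cell has to sit out at most one full cell time.
        CELL_BITS = 53 * 8  # 5 bytes of header + 48 bytes of payload

        for mbps in (25, 155, 622, 1000):
            cell_time_us = CELL_BITS / (mbps * 1e6) * 1e6  # microseconds
            print(f"{mbps:>5} Mbit/s: one cell occupies the link for {cell_time_us:.2f} us")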

    The second compromise of ATM is connection-oriented technology. Before a transmission session, a sender-receiver virtual channel is established at the data link layer, which cannot be used by other stations, whereas in traditional statistical multiplexing technologies no connection is established, and packets with the specified address are placed on the transmission medium. To do this, the port number and connection identifier, which is present in the header of each cell, are entered into the switching table. Subsequently, the switch processes incoming cells based on the connection IDs in their headers. Based on this mechanism, it is possible to regulate the throughput, delay and maximum data loss for each connection - that is, to ensure a certain quality of service.
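
    A minimal sketch of the forwarding step this implies (in Python; the port numbers and connection identifiers are invented, not taken from any real switch) - the table is filled at connection setup, after which every cell is handled by a simple lookup:

        # Sketch of connection-oriented cell switching: the table is populated when the
        # virtual channel is established; each arriving cell is then forwarded by looking up
        # (input port, connection id) -> (output port, new connection id).
        switching_table = {
            (1, 17): (3, 42),   # hypothetical entries created at connection setup
            (2, 99): (1, 17),
        }

        def switch_cell(in_port, cell):
            conn_id, payload = cell
            out_port, new_conn_id = switching_table[(in_port, conn_id)]
            return out_port, (new_conn_id, payload)  # header rewritten, payload untouched

        print(switch_cell(1, (17, b"48 bytes of user data")))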

    All of the above properties, plus good compatibility with the SDH hierarchy, allowed ATM to relatively quickly become the standard for backbone data networks. But with the full implementation of all the capabilities of the technology, big problems arose. As has happened more than once, local networks and client applications did not support ATM functions, and without this, a powerful technology with great potential turned out to be just an unnecessary conversion between the worlds of IP (essentially Ethernet) and SDH. This was a very unfortunate situation that the ATM community tried to correct. Unfortunately, there were some strategic miscalculations. Despite all the advantages of fiber optics over copper cables, the high cost of interface cards and switch ports made 155 Mbps ATM extremely expensive for use in this market segment.

    In an attempt to define low-speed solutions for desktop systems, the ATM Forum became embroiled in a destructive debate over what speed and connection type should be targeted. Manufacturers were divided into two camps: supporters of copper cable at 25.6 Mbit/s and supporters of optical cable at 51.84 Mbit/s. After a number of high-profile conflicts (initially the 51.84 Mbit/s speed was chosen), the ATM Forum proclaimed 25 Mbit/s as the standard. But precious time was lost forever. In the market, the technology had to face not “classic” Ethernet with its shared transmission medium, but Fast Ethernet and switched 10Base-T (with the prospect of switched 100Base-T appearing soon). High price, a small number of manufacturers, the need for more qualified service, problems with drivers, etc., only made the situation worse. Hopes for penetration into the corporate network segment collapsed, and the rather weak intermediate position of ATM was consolidated for some time. This is its position in the industry today.

    ComputerPress 10'2002

    Ministry of Education and Science of the Russian Federation

    Novosibirsk State Technical University

    Department of VT


    The term “network technology” is most often used in the narrow sense described above, but sometimes its expanded interpretation is also used, as any set of tools and rules for building a network, for example, “end-to-end routing technology,” “secure channel technology,” “IP network technology.”

    The protocols on which a network of a certain technology is built (in the narrow sense) were specifically developed for joint operation, so the network developer does not need additional efforts to organize their interaction. Sometimes network technologies are called basic technologies, meaning that they form the basis of any network. Examples of basic network technologies include, in addition to Ethernet, such well-known local network technologies as Token Ring and FDDI, or the X.25 and frame relay technologies for territorial networks. To obtain a functional network in this case, it is enough to purchase software and hardware related to the same basic technology - network adapters with drivers, hubs, switches, a cable system, etc. - and connect them in accordance with the requirements of the standard for this technology.

    The basic network technologies Token Ring, FDDI and 100VG-AnyLAN, although they have many individual features, at the same time share many common properties with Ethernet. First of all, this is the use of regular fixed topologies (hierarchical star and ring), as well as shared data transmission media. Significant differences between one technology and another are associated with the characteristics of the method used to access the shared medium. Thus, the differences between Ethernet technology and Token Ring technology are largely determined by the specifics of the medium-sharing methods embedded in them - the random access algorithm in Ethernet and the token passing access method in Token Ring.

    2. Ethernet technology (802.3).

    2.1. Main characteristics of the technology.

    Ethernet is the most widespread local network standard today. The total number of networks currently operating using the Ethernet protocol is estimated at 5 million, and the number of computers with Ethernet network adapters installed is 50 million.

    When people say Ethernet, they usually mean any of the variants of this technology. In a narrower sense, Ethernet is a network standard based on the experimental Ethernet Network, which Xerox developed and implemented in 1975. The access method was tested even earlier: in the second half of the 60s, the University of Hawaii radio network used various options for random access to the general radio environment, collectively called Aloha. In 1980, DEC, Intel, and Xerox jointly developed and published the Ethernet version II standard for coaxial cable networks, which became the final version of the proprietary Ethernet standard. Therefore, the proprietary version of the Ethernet standard is called the Ethernet DIX or Ethernet II standard.

    Based on the Ethernet DIX standard, the IEEE 802.3 standard was developed, which largely coincides with its predecessor, though there are still some differences. While IEEE 802.3 distinguishes between the MAC and LLC layers, the original Ethernet combines both into a single data link layer. Ethernet DIX defines an Ethernet Configuration Test Protocol, which is not found in IEEE 802.3. The frame format also differs somewhat, although the minimum and maximum frame sizes in these standards are the same. Often, to distinguish Ethernet as defined by the IEEE standard from the proprietary Ethernet DIX, the former is called 802.3 technology, while the name Ethernet without further qualification is kept for the proprietary standard.

    Depending on the type of physical medium, the IEEE 802.3 standard has various modifications - 10Base-5, 10Base-2, 10Base-T, 10Base-FL, 10Base-FB.

    In 1995, the Fast Ethernet standard was adopted, which in many ways is not an independent standard, as evidenced by the fact that its description is simply an additional section to the main 802.3 standard - section 802.3u. Similarly, the Gigabit Ethernet standard adopted in 1998 is described in section 802.3z of the main document.

    Manchester code is used to transmit binary information over the cable for all variants of the physical layer of Ethernet technology that provide a throughput of 10 Mbit/s.
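
    As a rough illustration of this encoding (a Python sketch; the polarity convention here follows the IEEE 802.3 description - '1' as a low-to-high transition, '0' as high-to-low - while some texts use the opposite convention):

        # Sketch of Manchester encoding: each bit becomes two half-intervals with a
        # mandatory transition in the middle of the bit interval.
        def manchester_encode(bits):
            levels = []
            for b in bits:
                levels += [0, 1] if b == "1" else [1, 0]  # '1' -> low->high, '0' -> high->low
            return levels

        print(manchester_encode("1010"))  # [0, 1, 1, 0, 0, 1, 1, 0]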

    All types of Ethernet standards (including Fast Ethernet and Gigabit Ethernet) use the same method of separating the data transmission medium - the CSMA/CD method.

    2.2. Access method CSMA/CD.

    Ethernet networks use a medium access method called carrier sense multiple access with collision detection (CSMA/CD).

    This method is used exclusively in networks with a logical common bus (which includes the radio networks that gave rise to this method). All computers on such a network have direct access to the common bus, so it can be used to transfer data between any two network nodes. At the same time, all computers on the network have the opportunity to immediately (allowing for the signal propagation delay through the physical medium) receive the data that any of the computers has begun to transmit onto the common bus (Fig. 1). The simplicity of the connection scheme is one of the factors that determined the success of the Ethernet standard. The cable to which all stations are connected is said to operate in Multiple Access (MA) mode.

    Fig. 1. Random access method CSMA/CD

    Stages of access to the environment

    All data transmitted over the network is placed in frames of a certain structure and provided with a unique address of the destination station.

    To be able to transmit a frame, the station must ensure that the shared medium is clear. This is achieved by listening to the fundamental harmonic of the signal, which is also called the carrier-sense (CS). A sign of an unoccupied medium is the absence of a carrier frequency on it, which with the Manchester coding method is 5-10 MHz, depending on the sequence of ones and zeros transmitted at the moment.

    If the medium is free, then the node has the right to start transmitting the frame. This case is shown first in Fig. 1: node 1 found the medium free and began transmitting its frame. In a classic Ethernet network on a coaxial cable, the signals of node 1's transmitter propagate in both directions, so that all network nodes receive them. The data frame is always preceded by a preamble, which consists of 7 bytes with the value 10101010 and an 8th byte equal to 10101011. The preamble is needed for the receiver to acquire bit and byte synchronization with the transmitter.

    All stations connected to the cable can recognize that a frame has been transmitted, and the station that recognizes its own address in the frame's header writes its contents into its internal buffer, processes the received data, passes it up its protocol stack, and then sends a response frame onto the cable. The address of the source station is contained in the original frame, so the destination station knows to whom the response should be sent.

    Node 2, during the transmission of the frame by node 1, also tried to start transmitting its own frame, but found that the medium was busy - a carrier frequency was present on it - so node 2 was forced to wait until node 1 stopped transmitting the frame.

    After the end of frame transmission, all network nodes are required to observe a technological pause (Inter Packet Gap) of 9.6 μs. This pause, also called the interframe interval, is needed to bring the network adapters back to their original state, as well as to prevent exclusive seizure of the medium by one station. After the end of the technological pause, nodes have the right to begin transmitting their frames, since the medium is free. Due to delays in signal propagation along the cable, not all nodes register the fact that node 1 has completed its frame transmission at strictly the same moment.

    In the example given, node 2 waited for node 1 to finish its frame transmission, paused for 9.6 μs, and began transmitting its own frame.
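
    The sequence just described can be condensed into a small sketch (Python; the function and variable names are ours, and the busy-waiting is a simplification - collision handling is covered below):

        import time

        INTERFRAME_GAP_S = 9.6e-6   # technological pause between frames

        def try_to_send(frame, medium_busy):
            while medium_busy():          # carrier sense: defer while a carrier is observed
                pass
            time.sleep(INTERFRAME_GAP_S)  # wait out the interframe interval
            transmit(frame)               # start placing the frame onto the shared bus

        def transmit(frame):
            print("transmitting", len(frame), "bytes")

        try_to_send(b"\x00" * 64, medium_busy=lambda: False)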

    Occurrence of a collision

    With the described approach, it is possible that two stations simultaneously try to transmit a data frame over the common medium. The medium-listening mechanism and the pause between frames do not guarantee against a situation where two or more stations simultaneously decide that the medium is free and begin transmitting their frames. In this case a collision is said to occur, since the contents of both frames collide on the common cable and the information is distorted - the encoding methods used in Ethernet do not allow the signals of individual stations to be separated from the combined signal.

    NOTE: This fact is reflected in the “Base(band)” component present in the names of all Ethernet physical-layer protocols (for example, 10Base-2, 10Base-T, etc.). A baseband network is one in which messages are sent digitally over a single channel, without frequency division multiplexing.

    Collision is a normal situation in Ethernet networks. In the example shown in Fig. 2, the collision was caused by the simultaneous transmission of data by nodes 3 and 1. For a collision to occur, it is not necessary that several stations start transmitting absolutely simultaneously; such a situation is unlikely. It is much more likely that a collision occurs because one node starts transmitting earlier than another, but the signals of the first simply do not have time to reach the second node by the time it decides to start transmitting its own frame. That is, collisions are a consequence of the distributed nature of the network.

    To correctly handle a collision, all stations simultaneously monitor the signals appearing on the cable. If the transmitted and observed signals differ, then a collision is detected (collision detection, CD). To increase the likelihood of early detection of the collision by all stations on the network, the station that has detected it interrupts the transmission of its frame (at an arbitrary point, possibly not on a byte boundary) and reinforces the collision by sending a special 32-bit sequence, called a jam sequence, onto the network.

    Fig. 2. Diagram of the occurrence and propagation of a collision

    After this, the transmitting station that detects the collision must stop transmitting and pause for a short random interval of time. It can then attempt to capture the medium again and transmit the frame. A random pause is selected using the following algorithm:

    Pause = L *(delay interval),

    where the delay interval is equal to 512 bit intervals (in Ethernet technology, it is customary to measure all intervals in bit intervals; the bit interval is denoted as bt and corresponds to the time between the appearance of two consecutive data bits on the cable; for a speed of 10 Mbit/s, the bit interval is 0.1 μs or 100 ns);

    L is an integer selected with equal probability from the range [0, 2^N - 1], where N is the number of the retransmission attempt for this frame: 1, 2, ..., 10.

    After the 10th attempt, the interval from which the pause is selected does not increase. Thus, a random pause can take values ​​from 0 to 52.4 ms.

    If 16 consecutive attempts to transmit a frame cause a collision, then the transmitter must stop trying and discard the frame.
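
    A sketch of this truncated exponential backoff (Python; the 512-bit slot at 10 Mbit/s corresponds to 51.2 μs, and the random range is [0, 2^k - 1] with k capped at 10, as described above):

        import random

        BIT_INTERVAL_S = 0.1e-6          # 0.1 us per bit at 10 Mbit/s
        SLOT_S = 512 * BIT_INTERVAL_S    # delay interval: 51.2 us

        def backoff_pause(attempt):
            """Random pause before retransmission attempt number `attempt` (1..16)."""
            if attempt > 16:
                raise RuntimeError("frame discarded after 16 collisions")
            k = min(attempt, 10)               # range stops growing after the 10th attempt
            L = random.randint(0, 2 ** k - 1)  # equiprobable integer from [0, 2^k - 1]
            return L * SLOT_S

        print(max(backoff_pause(16) for _ in range(5)))  # never exceeds ~52.4 ms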

    From the description of the access method it is clear that it is probabilistic in nature, and the probability of successfully obtaining the shared medium depends on the network load, that is, on the intensity of stations' need to transmit frames. When this method was developed in the late 70s, it was assumed that a data transfer rate of 10 Mbit/s was very high compared to the computers' needs for mutual data exchange, so the network load would always be light. This assumption sometimes still holds today, but real-time multimedia applications have appeared that heavily load Ethernet segments. In this case, collisions occur much more often. When the intensity of collisions is significant, the useful throughput of the Ethernet network drops sharply, since the network is almost constantly busy with repeated attempts to transmit frames. To reduce the intensity of collisions, one must either reduce the traffic, for example by decreasing the number of nodes in a segment or relocating applications, or increase the speed of the protocol, for example by switching to Fast Ethernet.

    It should be noted that the CSMA/CD access method does not at all guarantee that a station will ever be able to access the medium. Of course, when the network load is light, the probability of such an event is small, but when the network utilization factor approaches 1, such an event becomes very likely. This drawback of the random access method is the price to pay for its extreme simplicity, which has made Ethernet the most inexpensive technology. Other access methods - token access of Token Ring and FDDI networks, Demand Priority method of 100VG-AnyLAN networks - are free from this drawback.

    Double rotation time and collision detection

    Clear recognition of collisions by all network stations is a necessary condition for the correct operation of the Ethernet network. If any transmitting station does not recognize the collision and decides that it transmitted the data frame correctly, then this data frame will be lost. Due to the overlap of signals during a collision, the frame information will be distorted, and it will be rejected by the receiving station (possibly due to a checksum mismatch). Most likely, the corrupted information will be retransmitted by some upper-layer protocol, such as a connection-oriented transport or application protocol. But the retransmission of the message by upper-level protocols will occur after a much longer time interval (sometimes even after several seconds) compared to the microsecond intervals that the Ethernet protocol operates. Therefore, if collisions are not reliably recognized by Ethernet network nodes, this will lead to a noticeable decrease in the useful throughput of this network.

    For reliable collision detection, the following relationship must be satisfied:

    Tmin ≥ PDV,

    where Tmin is the transmission time of a frame of minimum length, and PDV is the time during which the collision signal manages to propagate to the farthest node in the network. Since in the worst case the signal must travel twice between the stations of the network that are most distant from each other (an undistorted signal passes in one direction, and a signal already distorted by a collision propagates on the way back), this time is called the double turnaround time (Path Delay Value, PDV).

    If this condition is met, the transmitting station must be able to detect the collision caused by its transmitted frame even before it finishes transmitting this frame.

    Obviously, the fulfillment of this condition depends, on the one hand, on the length of the minimum frame and network capacity, and on the other hand, on the length of the network cable system and the speed of signal propagation in the cable (this speed is slightly different for different types of cable).

    All parameters of the Ethernet protocol are selected in such a way that during normal operation of network nodes, collisions are always clearly recognized. When choosing parameters, of course, the above relationship was taken into account, connecting the minimum frame length and the maximum distance between stations in a network segment.

    The Ethernet standard assumes that the minimum length of a frame data field is 46 bytes (which, together with service fields, gives a minimum frame length of 64 bytes, and together with the preamble - 72 bytes or 576 bits). From here a limit on the distance between stations can be determined.

    So, in 10 Mbit/s Ethernet, the transmission time of a minimum-length frame is 575 bit intervals, therefore the double turnaround time should be less than 57.5 μs. The distance that the signal can travel in this time depends on the type of cable and for thick coaxial cable is approximately 13,280 m. Considering that during this time the signal must travel along the communication line twice, the distance between two nodes should not be more than 6,635 m. In the standard, the value of this distance is chosen significantly smaller, taking into account other, more stringent restrictions.
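
    The arithmetic above can be reproduced in a few lines (Python; the propagation speed of roughly 0.77 of the speed of light, about 231 m/μs, is an assumed typical value for thick coaxial cable and is used here only to show the order of magnitude):

        t_min_bit_intervals = 575              # transmission time of a minimum-length frame
        t_min_us = t_min_bit_intervals * 0.1   # 0.1 us per bit at 10 Mbit/s -> 57.5 us
        speed_m_per_us = 231                   # assumed ~0.77c in thick coax, illustration only

        round_trip_m = t_min_us * speed_m_per_us
        print(t_min_us, round_trip_m, round_trip_m / 2)
        # -> 57.5 us, ~13,280 m of signal path, i.e. roughly 6,640 m between the farthest nodes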

    One of these restrictions is related to the maximum permissible signal attenuation. To ensure the required signal power when it passes between the most distant stations of a cable segment, the maximum length of a continuous segment of a thick coaxial cable, taking into account the attenuation it introduces, was chosen to be 500 m. Obviously, on a 500 m cable, the conditions for collision recognition will be met with a large margin for frames of any standard length, including 72 bytes (the double turnaround time along a 500 m cable is only 43.3 bit intervals). Therefore, the minimum frame length could be set even shorter. However, technology developers did not reduce the minimum frame length, keeping in mind multi-segment networks that are built from several segments connected by repeaters.

    Repeaters increase the power of signals transmitted from segment to segment, as a result, signal attenuation is reduced and a much longer network can be used, consisting of several segments. In coaxial Ethernet implementations, designers have limited the maximum number of segments in the network to five, which in turn limits the total network length to 2500 meters. Even in such a multi-segment network, the collision detection condition is still met with a large margin (let us compare the distance of 2500 m obtained from the permissible attenuation condition with the maximum possible distance of 6635 m in terms of signal propagation time calculated above). However, in reality, the time margin is significantly less, since in multi-segment networks the repeaters themselves introduce an additional delay of several tens of bit intervals into the signal propagation. Naturally, a small margin was also made to compensate for deviations in cable and repeater parameters.

    As a result of taking into account all these and some other factors, the ratio between the minimum frame length and the maximum possible distance between network stations was carefully selected, which ensures reliable collision recognition. This distance is also called the maximum network diameter.

    As the frame transmission rate increases, which occurs in new standards based on the same CSMA/CD access method, such as Fast Ethernet, the maximum distance between network stations decreases in proportion to the increase in transmission rate. In the Fast Ethernet standard it is about 210 m, and in the Gigabit Ethernet standard it would be limited to 25 meters if the developers of the standard had not taken some measures to increase the minimum packet size.

    Table 2 gives the values of the main parameters of the 802.3 frame transmission procedure that do not depend on the implementation of the physical medium. It is important to note that each Ethernet physical medium option adds its own, often more stringent, limitations to these, which must also be met.

    Table 2. MAC Ethernet Layer Parameters

    3. Token Ring technology (802.5).

    3.1. Main characteristics of the technology.

    Token Ring networks, like Ethernet networks, are characterized by a shared data transmission medium, which in this case consists of cable segments connecting all network stations into a ring. The ring is considered as a common shared resource, and access to it requires not a random algorithm, as in Ethernet networks, but a deterministic one, based on transferring the right to use the ring from station to station in a certain order. This right is conveyed using a special-format frame called a token (marker).

    Token Ring technology was developed by IBM in 1984 and then submitted as a draft standard to the IEEE 802 committee, which adopted the 802.5 standard on its basis in 1985. IBM uses Token Ring as its main network technology for building local networks based on computers of various classes - mainframes, minicomputers and personal computers. Currently, IBM is the main trendsetter of Token Ring technology, producing about 60% of the network adapters for this technology.

    Token Ring networks operate at two bit rates - 4 and 16 Mbit/s. Mixing stations operating at different speeds in one ring is not allowed. Token Ring networks operating at 16 Mbps have some improvements in the access algorithm compared to the 4 Mbps standard.

    Token Ring technology is a more complex technology than Ethernet. It has fault tolerance properties. The Token Ring network defines network operation control procedures that use feedback of a ring-shaped structure - the sent frame always returns to the sending station. In some cases, detected errors in the network operation are eliminated automatically, for example, a lost token can be restored. In other cases, errors are only recorded, and their elimination is carried out manually by maintenance personnel.

    To control the network, one of the stations acts as a so-called active monitor. The active monitor is selected during ring initialization as the station with the maximum MAC address value. If the active monitor fails, the ring initialization procedure is repeated and a new active monitor is selected. In order for the network to detect the failure of an active monitor, the latter, in a working state, generates a special frame of its presence every 3 seconds. If this frame does not appear on the network for more than 7 seconds, then the remaining stations on the network begin the procedure for electing a new active monitor.
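
    The supervision timers just mentioned can be sketched as follows (Python; the 3 s and 7 s values are from the text, while the function names and the example MAC addresses are ours):

        PRESENCE_PERIOD_S = 3.0   # the active monitor announces itself this often
        ABSENCE_LIMIT_S = 7.0     # other stations start an election after this much silence

        def need_new_monitor(seconds_since_presence_frame):
            return seconds_since_presence_frame > ABSENCE_LIMIT_S

        def elect_monitor(mac_addresses):
            # during ring initialization the station with the highest MAC address wins
            return max(mac_addresses)

        print(need_new_monitor(8.0))                                  # True - start an election
        print(hex(elect_monitor([0x001122AA0001, 0x001122AA0002])))   # highest address wins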

    3.2. A token method for accessing a shared environment.

    In networks with a token access method (these include, in addition to Token Ring networks, FDDI networks, as well as networks close to the 802.4 standard - ArcNet and the MAP industrial networks), the right to access the medium is transferred cyclically from station to station along a logical ring.

    In the Token Ring network, the ring is formed by sections of cable connecting neighboring stations. Thus, each station is connected to a predecessor and a successor station and can communicate directly only with them. To provide stations with access to the physical medium, a frame of a special format and purpose - a token - circulates around the ring. In the Token Ring network, any station always receives data directly from only one station - the one preceding it in the ring. This station is called the nearest active upstream neighbor (Nearest Active Upstream Neighbor, NAUN). The station always transmits its data to its nearest downstream neighbor.

    Having received the marker, the station analyzes it and, if it does not have data to transmit, ensures its advancement to the next station. A station that has data to transmit, upon receiving the token, removes it from the ring, which gives it the right to access the physical medium and transmit its data. This station then sends a data frame of the established format into the ring bit by bit. The transmitted data always passes along the ring in one direction from one station to another. The frame is provided with a destination address and a source address.

    All stations on the ring relay the frame bit by bit, like repeaters. If the frame passes through the destination station, then, having recognized its address, this station copies the frame to its internal buffer and inserts an acknowledgment sign into the frame. The station that issued the data frame to the ring, upon receiving it back with confirmation of receipt, removes this frame from the ring and transmits a new token to the network to enable other network stations to transmit data. This access algorithm is used in Token Ring networks with a speed of 4 Mbit/s, described in the 802.5 standard.

    Fig. 3 illustrates the described medium access algorithm with a timing diagram. It shows packet A being transmitted in a six-station ring from station 1 to station 3. After passing the destination station 3, two flags are set in packet A - the address-recognized flag and the packet-copied-to-buffer flag (marked in the figure with an asterisk inside the packet). When the packet returns to station 1, the sender recognizes its packet by the source address and removes it from the ring. The flags set by station 3 tell the sending station that the packet reached the addressee and was successfully copied into its buffer.

    Fig. 3. Token access principle

    The time of ownership of the shared medium in a Token Ring network is limited by the token holding time, after which the station must stop transmitting its own data (the current frame is allowed to be completed) and pass the token further along the ring. The station may have time to transmit one or more frames during the token holding time, depending on the size of the frames and the token holding time. Typically, the default token holding time is 10 ms, and the maximum frame size is not defined in the 802.5 standard. For 4 Mbit/s networks it is usually 4 KB, and for 16 Mbit/s networks 16 KB. This is due to the fact that during the token holding time the station must have time to transmit at least one frame. At a speed of 4 Mbit/s, 5,000 bytes can be transferred in 10 ms, and at 16 Mbit/s - 20,000 bytes. The maximum frame sizes were chosen with some margin.
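
    The arithmetic behind these frame-size choices (a Python sketch; the 10 ms default holding time and the 4/16 KB limits are taken from the text):

        THT_S = 0.010   # default token holding time
        for mbps, max_frame_kb in ((4, 4), (16, 16)):
            bytes_per_tht = mbps * 1_000_000 * THT_S / 8
            print(f"{mbps} Mbit/s: {bytes_per_tht:,.0f} bytes fit into one THT, "
                  f"max frame chosen as {max_frame_kb} KB (with some margin)")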

    16 Mbps Token Ring networks also use a slightly different ring access algorithm, called the Early Token Release. In accordance with it, a station transmits an access token to the next station immediately after the end of transmission of the last bit of the frame, without waiting for the return of this frame along the ring with an acknowledgment bit. In this case, the ring capacity is used more efficiently, since frames from several stations move along the ring simultaneously. However, only one station can generate its frames at any given time - the one that currently owns the access token. At this time, the remaining stations only repeat other people's frames, so that the principle of dividing the ring in time is preserved, only the procedure for transferring ownership of the ring is accelerated.

    For different types of messages, frames transmitted can be assigned different priorities: from 0 (lowest) to 7 (highest). The decision on the priority of a particular frame is made by the transmitting station (the Token Ring protocol receives this parameter through cross-layer interfaces from upper-layer protocols, for example, the application one). A token also always has some level of current priority. A station has the right to seize a token transmitted to it only if the priority of the frame it wants to transmit is higher than (or equal to) the priority of the token. Otherwise, the station must pass the token to the next station in the ring.

    The active monitor is responsible for the presence of the token - and of exactly one copy of it - on the network. If the active monitor does not receive the token for a long time (for example, 2.6 s), it generates a new token.
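
    The two rules just described can be summarized in a short sketch (Python; the function names are ours, the 2.6 s threshold is from the text):

        TOKEN_TIMEOUT_S = 2.6

        def may_seize_token(frame_priority, token_priority):
            # a station may take the token only for a frame of equal or higher priority
            return frame_priority >= token_priority

        def monitor_action(seconds_since_token_seen):
            return "generate new token" if seconds_since_token_seen > TOKEN_TIMEOUT_S else "ok"

        print(may_seize_token(5, 3), monitor_action(3.0))  # True generate new token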

    4. FDDI technology.

    FDDI (Fiber Distributed Data Interface) technology - a fiber-optic distributed data interface - is the first local network technology in which the data transmission medium is a fiber optic cable. Work on the creation of technologies and devices for the use of fiber-optic channels in local networks began in the 80s, shortly after the start of industrial operation of such channels in territorial networks. The ANSI X3T9.5 working group developed the initial versions of the FDDI standard between 1986 and 1988; the standard provides frame transmission at a speed of 100 Mbit/s over a double fiber-optic ring up to 100 km long.

    4.1. Main characteristics of the technology.

    FDDI technology is largely based on Token Ring technology, developing and improving its basic ideas. The developers of FDDI technology set themselves the following goals as their highest priority:

    * increase the bit rate of data transfer to 100 Mbit/s;
    * increase the fault tolerance of the network through standard procedures for restoring it after various types of failures - cable damage, incorrect operation of a node or hub, high levels of interference on the line, etc.;
    * make the most of the potential network bandwidth for both asynchronous and synchronous (latency-sensitive) traffic.

    The FDDI network is built on the basis of two fiber optic rings, which form the main and backup data transmission paths between network nodes. Having two rings is the primary way to increase fault tolerance in an FDDI network, and nodes that want to take advantage of this increased reliability potential must be connected to both rings.

    In normal network operation mode, data passes through all nodes and all cable sections of the Primary ring only; this mode is called Thru - “end-to-end” or “transit”. The Secondary ring is not used in this mode.

    In the event of some type of failure, when part of the primary ring cannot transmit data (for example, a cable break or node failure), the primary ring is combined with the secondary (Fig. 4), again forming a single ring. This mode of network operation is called Wrap, that is, “folding” or “wrapping” of the rings. The wrap operation is performed by FDDI hubs and/or network adapters. To simplify this procedure, data on the primary ring is always transmitted in one direction (in the diagrams this direction is shown counterclockwise), and on the secondary ring in the opposite direction (shown clockwise). Therefore, when a common ring is formed from the two rings, the transmitters of the stations still remain connected to the receivers of neighboring stations, which allows information to be correctly transmitted and received by neighboring stations.

    Fig. 4. Reconfiguration of FDDI rings upon failure

    FDDI standards place a lot of emphasis on various procedures that allow you to determine if there is a fault in the network and then make the necessary reconfiguration. The FDDI network can fully restore its functionality in the event of single failures of its elements. In case of multiple failures, the network splits into several unconnected networks. FDDI technology complements the failure detection mechanisms of Token Ring technology with mechanisms for reconfiguring the data transmission path in the network, based on the presence of redundant links provided by the second ring.

    Rings in FDDI networks are considered as a common shared data transmission medium, so a special access method is defined for it. This method is very close to the access method of Token Ring networks and is also called the token ring method.

    The differences in the access method are that the token holding time in the FDDI network is not a constant value, as in the Token Ring network. This time depends on the load on the ring - with a small load it increases, and with large overloads it can decrease to zero. These changes in the access method only affect asynchronous traffic, which is not critical to small delays in frame transmission. For synchronous traffic, the token hold time is still a fixed value. A frame priority mechanism similar to that adopted in Token Ring technology is absent in FDDI technology. The technology developers decided that dividing traffic into 8 priority levels is redundant and it is enough to divide the traffic into two classes - asynchronous and synchronous, the latter of which is always serviced, even when the ring is overloaded.

    Otherwise, frame forwarding between ring stations at the MAC level is fully compliant with Token Ring technology. FDDI stations use an early token release algorithm, similar to Token Ring networks with a speed of 16 Mbps.

    MAC level addresses are in a standard format for IEEE 802 technologies. The FDDI frame format is close to the Token Ring frame format; the main differences are the absence of priority fields. Signs of address recognition, frame copying and errors allow you to preserve the procedures for processing frames available in Token Ring networks by the sending station, intermediate stations and the receiving station.

    Fig. 5 shows the structure of the FDDI protocols in relation to the seven-layer OSI model. FDDI defines the physical layer protocol and the media access sublayer (MAC) protocol of the data link layer. Like many other local area network technologies, FDDI uses the LLC data link control sublayer protocol defined in the IEEE 802.2 standard. Thus, although FDDI technology was developed and standardized by ANSI and not by IEEE, it fits entirely within the framework of the 802 standards.

    Fig. 5. Structure of FDDI technology protocols

    A distinctive feature of FDDI technology is the station control level - Station Management (SMT). It is the SMT layer that performs all the functions of managing and monitoring all other layers of the FDDI protocol stack. Each node in the FDDI network takes part in managing the ring. Therefore, all nodes exchange special SMT frames to manage the network.

    Fault tolerance of FDDI networks is ensured by protocols of other layers: with the help of the physical layer, network failures for physical reasons, for example, due to a broken cable, are eliminated, and with the help of the MAC layer, logical network failures are eliminated, for example, the loss of the required internal path for transmitting a token and data frames between hub ports .

    4.2. Features of the FDDI access method.

    To transmit synchronous frames, the station always has the right to capture the token upon arrival. In this case, the marker holding time has a predetermined fixed value.

    If a station of the FDDI ring needs to transmit an asynchronous frame (the type of frame is determined by upper-layer protocols), then, to determine whether it may capture the token on its next arrival, the station must measure the time interval that has passed since the token's previous arrival. This interval is called the token rotation time (TRT). The TRT interval is compared with another value - the maximum permissible token rotation time around the ring, T_Opr. If in Token Ring technology the maximum allowable token rotation time is a fixed value (2.6 s based on 260 stations in the ring), then in FDDI technology the stations agree on the value of T_Opr during ring initialization. Each station can propose its own value of T_Opr; as a result, the minimum of the times proposed by the stations is set for the ring. This allows the needs of the applications running on the stations to be taken into account. Typically, synchronous applications (real-time applications) need to send data in small portions more often, while asynchronous applications need to access the network less often but in larger portions. Preference is given to stations transmitting synchronous traffic.

    Thus, the next time the token arrives for the transmission of an asynchronous frame, the actual token rotation time TRT is compared with the maximum permissible T_Opr. If the ring is not overloaded, the token arrives before the T_Opr interval expires, that is, TRT < T_Opr. In this case the station is allowed to capture the token and transmit its frame (or frames) into the ring. The token holding time THT equals the difference T_Opr - TRT, and during this time the station transmits into the ring as many asynchronous frames as it manages to.

    If the ring is overloaded and the token is late, then the TRT interval will be greater than T_Opr. In this case, the station is not allowed to capture the token for an asynchronous frame. If all stations in the network want to transmit only asynchronous frames and the token made its way around the ring too slowly, then all stations pass the token on in repeat mode, the token quickly makes the next revolution, and on the next cycle the stations already have the right to capture the token and transmit their frames.
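
    A sketch of this timed-token rule for asynchronous frames (Python; T_Opr and TRT follow the notation of the text, while the numeric values are invented):

        def asynchronous_holding_time(trt, t_opr):
            # the token may be captured only if it returned early (TRT < T_Opr);
            # the station may then hold it for THT = T_Opr - TRT
            if trt >= t_opr:
                return 0.0          # token is late: pass it on, transmit nothing
            return t_opr - trt

        print(asynchronous_holding_time(trt=0.004, t_opr=0.010))  # 0.006 s available
        print(asynchronous_holding_time(trt=0.012, t_opr=0.010))  # 0.0 - the ring is overloaded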

    The FDDI access method for asynchronous traffic is adaptive and handles temporary network congestion well.

    4.3. Fault tolerance of FDDI technology.

    To ensure fault tolerance, the FDDI standard provides for the creation of two fiber optic rings - primary and secondary. The FDDI standard allows two types of connection of stations to the network. Simultaneous connection to the primary and secondary rings is called a dual connection - Dual Attachment, DA. Connecting only to the primary ring is called a single connection - Single Attachment, SA.

    The FDDI standard provides for the presence of end nodes in the network - stations (Station), as well as concentrators (Concentrator). For stations and hubs, any type of connection to the network is acceptable - both single and double. Accordingly, such devices have the appropriate names: SAS (Single Attachment Station), DAS (Dual Attachment Station), SAC (Single Attachment Concentrator) and DAC (Dual Attachment Concentrator).

    Typically, hubs have a double connection, and stations have a single connection, as shown in Fig. 6., although this is not necessary. To make it easier to connect devices correctly to the network, their connectors are marked. Devices with double connections must have connectors of types A and B; connectors M (Master) are available on the hub for a single connection of a station, the corresponding connector of which must be type S (Slave).

    Fig. 6. Connecting Nodes to FDDI Rings

    In the event of a single cable break between dual-connected devices, the FDDI network will be able to continue normal operation by automatically reconfiguring the internal frame paths between the hub ports (Fig. 7). A double cable break results in two isolated FDDI networks. If the cable leading to a station with a single connection breaks, that station becomes cut off from the network, while the ring continues to operate thanks to the reconfiguration of the internal path in the hub - the M port to which this station was connected is excluded from the general path.

    Fig. 7. Reconfiguring the FDDI network when a wire is broken

    To maintain network functionality during a power outage, dual-connection stations, that is, DAS stations, must be equipped with an Optical Bypass Switch, which creates a bypass path for the light streams when the power they receive from the station disappears.

    Finally, DAS stations or DAC hubs can be connected to two M ports of one or two hubs, creating a tree structure with primary and backup links. By default, port B supports primary communication and port A supports backup communication. This configuration is called a Dual Homing connection.

    Fault tolerance is maintained by constantly monitoring the SMT level of hubs and stations for the time intervals of token and frame circulation, as well as the presence of a physical connection between adjacent ports on the network. There is no dedicated active monitor in an FDDI network - all stations and hubs are equal, and when abnormalities are detected, they begin the process of reinitializing the network and then reconfiguring it.

    Reconfiguration of internal paths in hubs and network adapters is performed by special optical switches that redirect the light beam and have a rather complex design.

    4.4. Comparison of FDDI with Ethernet and Token Ring technologies.

    Table 1 presents the results of comparing FDDI technology with Ethernet and Token Ring technologies.

    Table 1. Characteristics of FDDI, Ethernet, Token Ring technologies

    FDDI technology was developed for use in critical areas of networks - on backbone connections between large networks, such as building networks, as well as for connecting high-performance servers to the network. Therefore, the main thing for developers was to ensure high data transfer speeds, fault tolerance at the protocol level and long distances between network nodes. All these goals were achieved. As a result, FDDI technology turned out to be of high quality, but very expensive. Even the emergence of a cheaper twisted pair option has not significantly reduced the cost of connecting a single node to an FDDI network. Therefore, practice has shown that the main area of ​​application of FDDI technology has become the backbone of networks consisting of several buildings, as well as networks on the scale of a large city, that is, the MAN class. The technology turned out to be too expensive to connect client computers and even small servers. And since FDDI equipment has been in production for about 10 years, a significant reduction in its cost cannot be expected.

    As a result, network specialists from the early 90s began to look for ways to create relatively inexpensive and at the same time high-speed technologies that would work just as successfully on all levels of the corporate network, as Ethernet and Token Ring technologies did in the 80s.

    5. Fast Ethernet and 100VG - AnyLAN as a development of Ethernet technology.

    Classic 10 Mbit Ethernet suited most users for about 15 years. However, in the early 90s its insufficient capacity began to be felt. For computers with Intel 80286 or 80386 processors and ISA (8 MB/s) or EISA (32 MB/s) buses, the Ethernet segment bandwidth was 1/8 or 1/32 of the disk-to-memory channel, and this was in good agreement with the ratio of the volumes of data processed locally to the data transferred over the network. For more powerful client stations with a PCI bus (133 MB/s), this share dropped to 1/133, which was clearly not enough. As a result, many 10 Mbit/s Ethernet segments became overloaded, server responsiveness dropped significantly, and collision rates increased sharply, further reducing usable throughput.

    There was a need to develop a “new” Ethernet, that is, a technology that would be equally cost-effective at a performance of 100 Mbit/s. As a result of searches and research, experts were divided into two camps, which ultimately led to the emergence of two new technologies - Fast Ethernet and 100VG-AnyLAN. They differ in the degree of continuity with classic Ethernet.

    In 1992, a group of network equipment manufacturers, including Ethernet technology leaders such as SynOptics, 3Com and several others, formed the Fast Ethernet Alliance, a non-profit association, to develop a standard for a new technology that would preserve the features of Ethernet technology to the maximum extent possible.

    The second camp was led by Hewlett-Packard and AT&T, which offered to take advantage of the opportunity to address some of the known shortcomings of Ethernet technology. After some time, these companies were joined by IBM, which contributed by proposing to provide some compatibility with Token Ring networks in the new technology.

    At the same time, IEEE Committee 802 formed a research group to study the technical potential of new high-speed technologies. Between late 1992 and late 1993, the IEEE team studied 100-Mbit solutions offered by various vendors. Along with the Fast Ethernet Alliance proposals, the group also reviewed high-speed technology proposed by Hewlett-Packard and AT&T.

    The discussion centered on the issue of maintaining the random CSMA/CD access method. The Fast Ethernet Alliance proposal preserved this method and thereby ensured continuity and consistency between 10 Mbps and 100 Mbps networks. The HP-AT&T coalition, which had the support of significantly fewer vendors in the networking industry than the Fast Ethernet Alliance, proposed an entirely new access method called Demand Priority- priority access on demand. It significantly changed the behavior of nodes on the network, so it could not fit into Ethernet technology and the 802.3 standard, and a new IEEE 802.12 committee was organized to standardize it.

    In the fall of 1995, both technologies became IEEE standards. The IEEE 802.3 committee adopted the Fast Ethernet specification as the 802.3u standard, which is not a standalone standard but an addition to the existing 802.3 standard in the form of chapters 21 to 30. The 802.12 committee adopted the 100VG-AnyLAN technology, which uses the new Demand Priority access method and supports two frame formats - Ethernet and Token Ring.

    5.1. Features of 100VG-AnyLAN technology.

    100VG-AnyLAN technology differs from classic Ethernet to a much greater extent than Fast Ethernet. The main differences are listed below.

    * A different access method, Demand Priority, is used, which provides a fairer distribution of network bandwidth compared to the CSMA/CD method. In addition, this method supports priority access for synchronous applications.
    * Frames are not transmitted to all network stations, but only to the destination station.
    * The network has a dedicated access arbiter - a concentrator; this significantly distinguishes the technology from others, which use an access algorithm distributed among the network stations.
    * Frames of two technologies are supported - Ethernet and Token Ring (it is this circumstance that gives the addition AnyLAN to the name of the technology).
    * Data is transmitted simultaneously over 4 pairs of Category 3 UTP cable. Each pair carries data at 25 Mbit/s, for a total of 100 Mbit/s. Unlike Fast Ethernet, there are no collisions in 100VG-AnyLAN networks, so it was possible to use all four pairs of a standard Category 3 cable for transmission. Data encoding uses a 5B/6B code, which keeps the signal spectrum within 16 MHz (the UTP Category 3 bandwidth) at a data rate of 25 Mbit/s per pair.
    * The Demand Priority access method is based on transferring to the concentrator the functions of the arbiter that resolves access to the shared medium. The 100VG-AnyLAN network consists of a central (root) hub and the end nodes and other hubs connected to it (Fig. 8).

    Fig. 8. 100VG-AnyLAN network

    Three levels of cascading are allowed. Each 100VG-AnyLAN hub and network adapter must be configured to handle either Ethernet frames or Token Ring frames, and both types of frames are not allowed to circulate simultaneously.

    The hub polls the ports cyclically. A station wishing to transmit a packet sends a special low-frequency signal to the hub, requesting transmission of the frame and indicating its priority. The 100VG-AnyLAN network uses two priority levels - low and high. A low priority level corresponds to normal data (file service, print service, etc.), and a high priority level corresponds to time-sensitive data (such as multimedia). Request priorities have static and dynamic components, that is, a station with a low priority level that has not had access to the network for a long time receives a high priority.

    If the network is free, then the hub allows the packet to be transmitted. After analyzing the recipient address in the received packet, the hub automatically sends the packet to the destination station. If the network is busy, the hub puts the received request into a queue, which is processed in accordance with the order in which requests were received and taking into account priorities. If another hub is connected to the port, polling is suspended until the downstream hub completes polling. Stations connected to concentrators of different hierarchy levels do not have advantages in accessing the shared medium, since the decision to grant access is made after all concentrators have polled all their ports.

    One question remains: how does the hub know which port the destination station is connected to? In all the other technologies the frame was simply transmitted to all stations of the network, and the destination station, recognizing its address, copied the frame into its buffer. To solve this problem, the hub learns the MAC address of a station at the moment it is physically connected to the network by cable. Where in other technologies the physical connection procedure determines cable connectivity (the link test in 10Base-T), the port type (FDDI) or the port speed (the auto-negotiation procedure in Fast Ethernet), in 100VG-AnyLAN the hub, when establishing a physical connection, learns the MAC address of the station and stores it in a MAC address table similar to a bridge/switch table. The difference between a 100VG-AnyLAN hub and a bridge/switch is that the hub has no internal buffer for storing frames. It therefore accepts only one frame at a time from the network stations, sends it to the destination port, and does not accept new frames until this frame has been fully received by the destination station. So the effect of a shared medium is preserved; only network security improves - frames do not reach other stations' ports and are harder to intercept.
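
    As an illustration only (a toy model, not the actual 802.12 state machine), the following Python sketch mimics the hub behaviour described above: the hub learns a station's MAC address when its port comes up, grants transmission requests in polling rounds with high-priority requests served before low-priority ones, and forwards each frame only to the destination port. The class and field names are invented for this sketch.

```python
from collections import deque

class DemandPriorityHub:
    """Toy model of a 100VG-AnyLAN root hub (illustrative, not the 802.12 spec)."""

    def __init__(self):
        self.mac_table = {}   # MAC address -> port, learned at physical connection time
        self.high = deque()   # pending (port, frame) requests, high priority
        self.low = deque()    # pending (port, frame) requests, low priority

    def link_up(self, port, mac):
        # The hub learns the station's MAC address at the moment of connection.
        self.mac_table[mac] = port

    def request(self, port, frame, high_priority=False):
        # A station signals a transmission request together with its priority level.
        (self.high if high_priority else self.low).append((port, frame))

    def poll(self):
        # One polling step: grant one pending request, high priority first.
        for queue in (self.high, self.low):
            if queue:
                port, frame = queue.popleft()
                dst_port = self.mac_table.get(frame["dst"])
                if dst_port is not None:
                    # The frame goes only to the destination port, not to all stations.
                    print(f"port {port} -> port {dst_port}: {frame['data']}")
                return

hub = DemandPriorityHub()
hub.link_up(1, "00:AA"); hub.link_up(2, "00:BB")
hub.request(1, {"dst": "00:BB", "data": "file block"})             # low priority
hub.request(2, {"dst": "00:AA", "data": "video frame"}, True)      # high priority
hub.poll()   # serves the high-priority request first
hub.poll()   # then the low-priority one
```

    The dynamic priority boost mentioned above (a long-waiting low-priority station being promoted) is not modelled here; only the basic polling order is shown.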

    An important feature of 100VG-AnyLAN technology is the preservation of the Ethernet and Token Ring frame formats. Proponents of 100VG-AnyLAN argue that this approach facilitates internetworking across bridges and routers and also provides compatibility with existing network management tools, in particular protocol analyzers.

    Despite many good technical solutions, 100VG-AnyLAN technology did not attract many supporters and is significantly inferior in popularity to Fast Ethernet. This may be because the technical capabilities of ATM for supporting different types of traffic are considerably wider than those of 100VG-AnyLAN. Therefore, where fine-grained quality of service must be ensured, ATM technology is used (or is planned to be used). And for networks in which there is no need to maintain quality of service at the level of shared segments, Fast Ethernet has turned out to be more common. Moreover, to support applications that are very demanding of data transfer speed, there is Gigabit Ethernet technology, which, while maintaining continuity with Ethernet and Fast Ethernet, provides a data transfer rate of 1000 Mbit/s.

    6. High-speed Gigabit Ethernet technology.

    6.1. General characteristics of the standard.

    Quite soon after Fast Ethernet products appeared on the market, network integrators and administrators felt certain limitations when building corporate networks. In many cases, servers connected over 100 Mbit/s links overloaded network backbones that also operated at 100 Mbit/s - FDDI and Fast Ethernet backbones. There was a need for the next level of the speed hierarchy. In 1995, only ATM switches could provide a higher speed level, but in the absence at that time of convenient means of migrating this technology to local networks (although the LAN Emulation, LANE, specification was adopted in early 1995, its practical implementation was still to come), almost no one dared to deploy it in a local network. In addition, ATM technology was very expensive.

    Therefore, the next logical step taken by IEEE was that five months after the final adoption of the Fast Ethernet standard in June 1995, the IEEE high-speed technology research group was directed to consider the possibility of developing an Ethernet standard with an even higher bit rate.

    In the summer of 1996, the creation of the 802.3z group was announced to develop a protocol as similar as possible to Ethernet, but with a bit rate of 1000 Mbps. As with Fast Ethernet, the message was received with great enthusiasm by Ethernet proponents.

    The main reason for the enthusiasm was the prospect of the same smooth migration of network backbones to Gigabit Ethernet, just as overloaded Ethernet segments located at the lower levels of the network hierarchy had been migrated to Fast Ethernet. In addition, experience in transmitting data at gigabit speeds already existed, both in wide area networks (SDH technology) and in local networks - Fiber Channel technology, which is used mainly to connect high-speed peripherals to large computers and transmits data over fiber optic cable at a speed close to a gigabit using the redundant 8B/10B code.

    The Gigabit Ethernet Alliance, formed to coordinate efforts in this area, from the very beginning included such industry leaders as Bay Networks, Cisco Systems and 3Com. Over the year of its existence, the number of participants in the Gigabit Ethernet Alliance has grown significantly and now exceeds 100. The Fiber Channel physical layer, with its 8B/10B code, was adopted as the first variant of the physical layer (as in the case of Fast Ethernet, where the ready-made FDDI physical layer was adopted to speed up the work).

    The first version of the standard was reviewed in January 1997, and the 802.3z standard was finally adopted on June 29, 1998 at a meeting of the IEEE 802.3 committee. Work on implementing Gigabit Ethernet on Category 5 twisted pair cables was transferred to a special committee 802.3ab, which has already considered several options for the draft of this standard, and since July 1998 the project has become fairly stable.

    Without waiting for the standard to be adopted, some companies released the first Gigabit Ethernet equipment on fiber optic cable by the summer of 1997.

    The main idea of the developers of the Gigabit Ethernet standard is to preserve the ideas of classical Ethernet technology as much as possible while achieving a bit rate of 1000 Mbit/s.

    Since it is natural to expect a new technology to include technical innovations following the general trend of network technology development, it is important to note what Gigabit Ethernet, like its slower counterparts, does not support at the protocol level:

    • quality of service;
    • redundant connections;
    • testing the operability of nodes and equipment (in the latter case, with the exception of testing port-to-port connectivity, as is done for 10Base-T and 10Base-F Ethernet and for Fast Ethernet).

    All three of these properties are considered very promising and useful in modern networks, and especially in networks of the near future. Why do the authors of Gigabit Ethernet abandon them?

    Regarding quality of service, a short answer can be given as follows: "if you have power, you don't need intelligence." If the network backbone operates at a speed several times higher than the average network activity of a client computer and 100 times higher than the average network activity of a server with a 100 Mbit/s network adapter, then in many cases you do not have to worry about packet delays on the backbone at all. With a low load factor on the 1000 Mbit/s backbone, the queues in Gigabit Ethernet switches are small, and the buffering and switching time at this speed amounts to a few microseconds or even fractions of a microsecond.
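
    The microsecond-scale claim is easy to check. The hedged sketch below computes only the serialization (buffering) time - the time needed to clock a frame through a port at 1000 Mbit/s - for a minimum 64-byte and a mid-size 512-byte frame; switch-internal processing time is not included.

```python
# Serialization (buffering) time of a frame at Gigabit Ethernet speed.
LINK_RATE_BPS = 1_000_000_000          # 1000 Mbit/s

for frame_bytes in (64, 512):
    t_us = frame_bytes * 8 / LINK_RATE_BPS * 1e6
    print(f"{frame_bytes:>3}-byte frame: {t_us:.3f} microseconds")
# 64-byte frame : 0.512 microseconds (a fraction of a microsecond)
# 512-byte frame: 4.096 microseconds (a few microseconds)
```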

    And if the backbone does become loaded to a significant extent, priority can be given to delay-sensitive traffic, or to traffic requiring a guaranteed average rate, using the priority techniques in switches - the corresponding standards for switches have already been adopted. Yet it will still be possible to use a very simple technology (almost the same as Ethernet), whose operating principles are known to practically all network specialists.

    The main idea of the developers of Gigabit Ethernet technology is that there are and will be many networks in which the high speed of the backbone and the ability to assign priorities to packets in switches are quite sufficient to ensure the quality of transport service for all network clients. Only in those rare cases when the backbone is heavily loaded and the quality-of-service requirements are very stringent is it necessary to use ATM technology, which, at the cost of high technical complexity, really does guarantee quality of service for all major types of traffic.

    Redundant connections and equipment testing will not be supported by Gigabit Ethernet technology because higher-level protocols, such as Spanning Tree, routing protocols and others, handle these tasks well. The technology's developers therefore decided that the lower layer simply needs to transfer data quickly, while more complex and less frequently needed tasks (for example, traffic prioritization) should be passed up to the higher layers.

    What does Gigabit Ethernet technology have in common with Ethernet and Fast Ethernet?

    • All Ethernet frame formats are preserved.
    • There will still be a half-duplex version of the protocol that supports the CSMA/CD access method, and a full-duplex version that works with switches. The developers of Fast Ethernet had already had doubts about keeping the half-duplex version of the protocol, since it is difficult to make the CSMA/CD algorithm work at high speeds; however, the access method remained unchanged in Fast Ethernet, and it was decided to retain it in the new Gigabit Ethernet technology as well. Keeping a low-cost solution for shared media allows Gigabit Ethernet to be used in small workgroups with fast servers and workstations.
    • All major types of cables used in Ethernet and Fast Ethernet are supported: fiber optic, Category 5 twisted pair, and coaxial cable.

    However, in order to maintain the above properties, the developers of Gigabit Ethernet technology had to make changes not only to the physical layer, as was the case with Fast Ethernet, but also to the MAC layer.

    The developers of the Gigabit Ethernet standard faced several difficult problems. One of them was ensuring an acceptable network diameter for half-duplex operation. Because of the cable length limitations imposed by CSMA/CD, a shared-medium version of Gigabit Ethernet would allow a segment length of only 25 meters if the frame size and all CSMA/CD parameters were kept unchanged. Since there are many applications where the network diameter needs to be increased to at least 200 meters, it was necessary to solve this problem somehow, with minimal changes relative to Fast Ethernet technology.

    Another major challenge was achieving a bit rate of 1000 Mbit/s on the main cable types. Even for optical fiber, reaching such a speed presents some problems, since Fiber Channel technology, whose physical layer was taken as the basis for the fiber optic version of Gigabit Ethernet, provides a data transfer rate of only 800 Mbit/s (the pulse rate on the line in this case is approximately 1000 Mbaud, but with the 8B/10B encoding method the useful bit rate is only 8/10 of the line pulse rate).
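
    A quick calculation illustrates the 8B/10B overhead mentioned here: the code carries 8 data bits in every 10 line bits, so only 8/10 of the line pulse rate is useful throughput. Using the approximate line figure quoted above:

```python
# Useful throughput of an 8B/10B-coded line (figures taken from the text above).
LINE_RATE_MBAUD = 1000                 # approximate pulse rate on the line
DATA_BITS, LINE_BITS = 8, 10           # 8B/10B block code

useful_mbps = LINE_RATE_MBAUD * DATA_BITS / LINE_BITS
print(f"useful data rate: {useful_mbps:.0f} Mbit/s")                             # 800 Mbit/s
print(f"coding overhead : {(1 - DATA_BITS / LINE_BITS):.0%} of the line rate")   # 20%
```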

    And finally, the most difficult task was supporting twisted pair cable. At first glance such a task seems insoluble - after all, even for 100 Mbit/s protocols rather complex coding methods had to be used to fit the signal spectrum into the cable bandwidth. However, the successes of coding specialists, demonstrated recently in new modem standards, showed that the problem had a chance of being solved. In order not to slow down the adoption of the main version of the Gigabit Ethernet standard on fiber optic and coaxial cable, a separate 802.3ab committee was created to develop the Gigabit Ethernet standard over Category 5 twisted pair.

    All these tasks were successfully solved.

    6.2. Means of ensuring a network diameter of 200 m on a shared medium.

    To extend the maximum diameter of a Gigabit Ethernet network in half-duplex mode to 200 m, the technology's developers took fairly natural measures based on the known relationship between the transmission time of a minimum-length frame and the double turnaround (round-trip) time.

    The minimum frame size was increased (excluding the preamble) from 64 to 512 bytes, or 4096 bt. Accordingly, the double turnaround time could also be increased to 4095 bt, which makes a network diameter of about 200 m possible when using a single repeater. With a round-trip signal delay of 10 bt per meter, each 100 m fiber optic segment contributes 1000 bt to the double turnaround time, and if the repeater and the network adapters contribute the same delays as in Fast Ethernet technology (the figures were given in the previous section), then a repeater delay of 1000 bt plus 1000 bt for a pair of network adapters gives a total double turnaround time of 4000 bt, which satisfies the condition for collision detection. To bring the frame up to the length required by the new technology, the network adapter pads the data field with a so-called extension of up to 448 bytes - a field filled with forbidden 8B/10B code symbols, which cannot be mistaken for data codes.
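
    The numbers in the preceding paragraph can be restated as a simple budget. The sketch below (Python, using only the values given above) checks that the extension pads a minimum 64-byte frame to 512 bytes and that two 100 m fiber segments, one repeater and a pair of adapters fit within the allowed double turnaround time.

```python
# Collision-domain budget for half-duplex Gigabit Ethernet with one repeater,
# using the delay figures quoted in the text above.

MIN_FRAME_BYTES = 64
SLOT_BYTES = 512                        # extended minimum frame
slot_bt = SLOT_BYTES * 8                # 4096 bit times
allowed_bt = slot_bt - 1                # 4095 bt, the permitted double turnaround time

extension_bytes = SLOT_BYTES - MIN_FRAME_BYTES
print(f"carrier extension for a minimum frame: {extension_bytes} bytes")   # 448

ROUND_TRIP_DELAY_PER_M_BT = 10          # double (round-trip) cable delay, bt per metre
SEGMENT_M = 100                         # two 100 m segments joined by a single repeater
REPEATER_BT = 1000                      # repeater delay (as in Fast Ethernet)
ADAPTER_PAIR_BT = 1000                  # a pair of network adapters

round_trip_bt = 2 * SEGMENT_M * ROUND_TRIP_DELAY_PER_M_BT + REPEATER_BT + ADAPTER_PAIR_BT
print(f"double turnaround time: {round_trip_bt} bt")                       # 4000 bt
print(f"fits within {allowed_bt} bt: {round_trip_bt <= allowed_bt}")       # True
```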

    To reduce the overhead of using very long frames to transmit short acknowledgments, the developers of the standard allowed end nodes to transmit several frames in a row without yielding the medium to other stations. This mode is called Burst Mode - exclusive burst mode. A station may transmit several frames in a row with a total length of no more than 65,536 bits, or 8192 bytes. If a station needs to transmit several small frames, it may not pad them to a size of 512 bytes, but instead transmit them in a row until the 8192-byte limit is exhausted (this limit includes all bytes of a frame, including the preamble, header, data and checksum). The 8192-byte limit is called BurstLength. If a station begins to transmit a frame and the BurstLength limit is reached in the middle of the frame, the frame is allowed to be transmitted to the end.
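
    The sketch below is an illustrative (not normative) model of the burst-mode rules just described: only the first frame of a burst is padded to 512 bytes, following frames are sent back to back without padding, and a frame that has already been started when the 8192-byte BurstLength limit is reached is still transmitted to the end. The function name and the per-frame bookkeeping are invented for this sketch and ignore preamble and inter-frame gaps.

```python
# Toy model of Gigabit Ethernet Burst Mode accounting (illustrative only).
SLOT_BYTES = 512          # minimum (extended) length of the first frame in a burst
BURST_LIMIT = 8192        # BurstLength: total byte budget of one burst

def plan_burst(frame_sizes):
    """Return the on-wire sizes of the frames sent in one burst and the total used."""
    sent, used = [], 0
    for i, size in enumerate(frame_sizes):
        if used >= BURST_LIMIT:
            break                              # budget already exhausted, frame not started
        wire_size = max(size, SLOT_BYTES) if i == 0 else size   # only the first frame is padded
        sent.append(wire_size)                 # a frame that has been started is always finished,
        used += wire_size                      # even if it carries the total past the limit
    return sent, used

frames = [100] * 100                           # one hundred short 100-byte frames queued
sent, used = plan_burst(frames)
print(f"frames sent in burst: {len(sent)}, bytes on the wire: {used}")
# First frame padded to 512 bytes; the rest go unpadded; the last started frame
# is completed even though it takes the total slightly past 8192 bytes.
```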

    Increasing the "combined" frame to 8192 bytes somewhat delays other stations' access to the shared medium, but at a speed of 1000 Mbit/s this delay is not significant.

    7. Conclusion.

    Gigabit Ethernet technology adds a new step of 1000 Mbit/s to the speed hierarchy of the Ethernet family. This step makes it possible to build large local networks efficiently, in which powerful servers and the backbones of the lower network levels operate at 100 Mbit/s, while a Gigabit Ethernet backbone connects them, providing a sufficiently large reserve of bandwidth.

    The developers of Gigabit Ethernet technology have maintained a large degree of continuity with Ethernet and Fast Ethernet technologies. Gigabit Ethernet uses the same frame formats as previous versions of Ethernet, operates in full-duplex and half-duplex modes, supporting the same CSMA/CD access method on the shared media with minimal modifications.

    8. List of used literature.

    Olifer V. G., Olifer N. A. Computer Networks: Principles, Technologies, Protocols: a textbook for universities. St. Petersburg: Piter. - 672 p.