Token Access Methods

The token-passing method is one of the selective deterministic peer access methods. Bus-topology networks that use token passing are called token bus networks; ring networks are called token ring networks.

In token bus networks, the token is a frame whose address field contains the address of the node currently granted the right to access the transmission medium. After transmitting its data frame, the transmitting node writes the address of the next node into the token and issues the token onto the channel.

Networks of the “token ring” type, being ring-topology networks, have a sequential configuration: each pair of adjacent nodes is connected by a separate channel, and the network functions only if all nodes function. In such networks, the token does not contain the address of the node allowed to transmit, only a busy field that can hold one of two values: “busy” or “free”. When a node with data to send receives a free token, it changes the token state to busy and sends the token followed by its data frame onto the channel. The receiving station, having recognized its address in the data frame, copies the data intended for it but does not change the state of the token. The token is set back to “free” by the node that marked it busy, after the token and its data frame have made a complete rotation around the ring; the data frame is then removed from the ring. The node cannot reuse the token to send another data frame: it must pass the free token further along the ring and wait to receive it again after one or more rotations.
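The busy/free token mechanism described above can be sketched as a small simulation. This is an illustrative model only, not a real MAC implementation; the node names and token structure are assumptions for the example.

```python
# Minimal sketch of token-ring frame delivery: a token circulates around
# the ring; a node with pending data marks it busy and attaches a frame;
# the token is freed only after the frame completes a full rotation back
# to the sender, which then removes the frame from the ring.

def ring_rotation(nodes, sender, dest, payload):
    """Walk the token once around the ring starting at `sender`."""
    token = {"busy": True, "frame": {"src": sender, "dst": dest, "data": payload}}
    delivered = None
    n = len(nodes)
    # The token visits every other node and returns to the sender.
    for hop in range(1, n + 1):
        node = nodes[(nodes.index(sender) + hop) % n]
        if node == token["frame"]["dst"]:
            delivered = token["frame"]["data"]   # receiver copies the data...
        if node == token["frame"]["src"]:        # ...sender removes the frame
            token = {"busy": False, "frame": None}
    return token, delivered

token, data = ring_rotation(["A", "B", "C", "D"], sender="A", dest="C", payload="hello")
print(token["busy"], data)   # the token returns free; the data reached C
```

Note that the receiver only copies the data; it is the sender that removes the frame and frees the token, exactly as in the description above.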

Peer-to-peer priority systems include priority slot systems, non-collision carrier sense systems, and priority token passing systems.

Priority slot systems are similar to time-division multiplexing systems, but slots are allocated according to node priorities. Priorities can be set based on criteria such as previous slot ownership, response time, or the amount of data transferred.

Carrier-sense systems without collisions (CSMA/CA, Carrier Sense Multiple Access / Collision Avoidance) differ from collision-detection systems by having timers on the nodes that determine safe moments for transmission. Timer durations are set according to node priorities: stations with higher priority have shorter timers.
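The effect of these priority timers can be shown with a toy model (purely illustrative; station names and timer values are assumptions): among the stations that are ready to transmit at the same time, the one whose timer expires first seizes the channel.

```python
# Toy model of collision avoidance through priority timers: each ready
# station waits out its own timer before transmitting; higher-priority
# stations have shorter timers, so the highest-priority ready station
# always starts transmitting first and the others back off.

def next_transmitter(ready_stations):
    """ready_stations: {name: timer_duration_us}; the shortest timer wins."""
    return min(ready_stations, key=ready_stations.get)

# Station A (timer 10 us) outranks B (20 us) and C (30 us):
print(next_transmitter({"A": 10, "B": 20, "C": 30}))   # -> A
print(next_transmitter({"B": 20, "C": 30}))            # -> B (A has nothing to send)
```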

Priority token-passing systems assign priorities so that the lower the node number, the higher its priority. The token contains a reservation field, into which a node that is about to transmit writes its priority value. If the ring contains a higher-priority node that also has data to transmit, that node writes its own priority into the reservation field, overriding the previous request (while keeping the old value of the field in its memory). If the token a node receives carries that node's priority in the reservation field, the node may transmit its data. After the token has completed its rotation and been released, the transmitting node must restore the reservation value it saved in memory back into the token.
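The override-and-restore behavior of the reservation field can be sketched as follows (a simplified model; class and field names are illustrative, and lower numbers mean higher priority, as in the text):

```python
# Sketch of the reservation mechanism in priority token passing: a node
# overrides the reservation only with a better (lower) value, remembers
# the displaced value, and restores it after its own transmission.

class Node:
    def __init__(self, priority):
        self.priority = priority
        self.saved_reservation = None

    def see_token(self, token):
        """Called as the busy token passes a node that has data to send."""
        if token["reservation"] is None or self.priority < token["reservation"]:
            self.saved_reservation = token["reservation"]  # keep the old value
            token["reservation"] = self.priority
        return token

    def restore(self, token):
        """After transmitting, put the displaced reservation value back."""
        token["reservation"] = self.saved_reservation
        return token

token = {"reservation": None}
n5, n2 = Node(5), Node(2)
token = n5.see_token(token)      # node 5 reserves the token
token = n2.see_token(token)      # node 2 (higher priority) overrides it
print(token["reservation"])      # -> 2
token = n2.restore(token)
print(token["reservation"])      # -> 5 (node 5's request is back)
```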


Network adapters

Network adapters interface network devices with the transmission medium according to the accepted rules of information exchange. The network device can be a user's computer, a network server, a workstation, and so on. The set of functions a network adapter performs depends on the specific network protocol. Because the network adapter sits physically and logically between the device and the network medium, its functions can be divided into those for interfacing with the network device and those for exchanging data with the network.

Network functions can be redistributed between the adapter and the computer. The more functions a computer performs, the simpler the functional diagram of the adapter. The main network functions of the adapter include:

– Galvanic isolation from the coaxial cable or twisted pair. Pulse transformers are most often used for this purpose. In an Ethernet network this circuit is somewhat more complicated, because collision detection relies on analyzing the DC component of the signal. Optocouplers are sometimes used for isolation.

– Encoding and decoding of signals. The self-synchronizing Manchester code is used most often.

– Recognition of the adapter's own address in a received packet. The physical address of the adapter can be set by switches, stored in a special register, or burned into a PROM.

– Conversion of parallel code to serial during transmission and the inverse conversion during reception. In the simplest case, shift registers with parallel input and serial output are used for this purpose. This function can also be implemented in software.
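The software equivalent of such a shift register can be sketched in a few lines (illustrative only; LSB-first bit order is assumed, as in Ethernet byte transmission):

```python
# Parallel-in/serial-out shift register in software: a byte is loaded in
# parallel and shifted out bit by bit, low-order bit first; reception
# shifts the arriving bits back into a byte.

def serialize(byte):
    bits = []
    for _ in range(8):
        bits.append(byte & 1)   # output the low-order bit
        byte >>= 1              # shift the register
    return bits

def deserialize(bits):
    byte = 0
    for i, b in enumerate(bits):
        byte |= b << i          # shift each arriving bit into place
    return byte

bits = serialize(0xA5)
print(bits)                 # -> [1, 0, 1, 0, 0, 1, 0, 1]
assert deserialize(bits) == 0xA5
```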

– Intermediate storage of data and service information in a buffer. A buffer makes it possible to assign network control functions to the adapter; with a buffer present, the computer need not track the exact moment of data transfer.

– Identification of conflict situations and control of the state of the network. This function is most important in networks with a “bus” topology and with a random method of access to the transmission medium. Possible conflicts must be resolved by the adapter itself.

– Checksum calculation. The checksum is most commonly computed with a shift register whose feedback taps are combined through modulo-2 adders; the positions of the feedback taps are determined by the chosen generator polynomial.
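The shift-register-with-feedback scheme is the CRC principle, and it can be modeled bit by bit in software. CRC-8 with the generator polynomial x^8 + x^2 + x + 1 (0x07) is used here purely as an example; real Ethernet adapters use CRC-32.

```python
# Bit-level model of checksum calculation with a feedback shift register:
# when the bit shifted out of the register is 1, the feedback taps
# (defined by the generator polynomial) are XORed into the register.

def crc8(data, poly=0x07):
    reg = 0
    for byte in data:
        reg ^= byte                      # feed the next byte into the register
        for _ in range(8):
            if reg & 0x80:               # MSB set: shift and apply feedback
                reg = ((reg << 1) ^ poly) & 0xFF
            else:
                reg = (reg << 1) & 0xFF  # MSB clear: plain shift
    return reg

frame = b"network"
print(hex(crc8(frame)))
# Appending the checksum makes the whole sequence divide evenly by the
# polynomial, so the receiver's register ends at zero for a good frame:
assert crc8(frame + bytes([crc8(frame)])) == 0
```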

– Matching the rate of data transfer between the computer and the adapter with the exchange rate on the network. If the network exchange rate is low, the computer has to wait for the moment of transmission; if it is high, the computer may not manage to supply its data in time. The adapter copes with this task with the help of its buffer.


The main function of a hub is to repeat each received signal on all ports (for Ethernet) or on some of them. Accordingly, the most common name for this kind of device is a repeater. For 10BaseT Ethernet with its “star” topology, the term “hub” is traditionally used. All these terms are equivalent and interchangeable. The hub operates at the physical layer of the OSI model (it deals with electrical signals, their levels, polarity, and so on) and partly at the data link layer (Ethernet repeaters, for example, can detect collisions), but it performs no frame analysis.

Each hub port connects either end nodes, or other hubs or other network devices, or (for example, in 10Base2 Ethernet) entire physical cable segments.

The hub is used primarily to increase the diameter of the network and the number of connected nodes. Core LAN technologies allow multiple hubs in the same network, but under certain conditions. For example, between any pair of nodes in an Ethernet network there can be no more than four repeaters (so the maximum path includes five segments, of which nodes may connect to only three – the so-called “5-4-3” rule), and the signal propagation delay between any pair of nodes must not exceed 25 µs.

A network built on hubs forms a single collision domain. Each packet issued by any node must reach all other nodes, and during this time no other node can transmit data.

As the number of nodes in the network grows, the frequency of collisions increases and the useful throughput drops rapidly. For Ethernet technologies, a load of 40-50% of the maximum bandwidth is considered acceptable: as long as the total volume of transmitted data does not exceed 40-50% of 10 Mbps (for Ethernet), the network works fine, but as the load rises further, the useful bandwidth falls off quickly. The acceptable number of nodes in a network, provided no multimedia data is transmitted, is about 30.

Structurally, hubs come in one of four variants: standalone, stackable, modular, and modular-stackable.

Standalone and stackable hubs are designed as a separate chassis with a fixed number and type of ports (usually up to 24). All ports generally support the same transmission medium. Sometimes a port is allocated for connecting to a backbone or for cascading. A stackable hub additionally has a special port for combining several such hubs into a single device – a stack of hubs. As a rule, up to 8 hubs participate in a stack (sometimes more). A modular hub consists of a common chassis and modules plugged into it. Different modules may have different numbers of ports and support different types of physical media. As a rule, connecting or disconnecting a module does not require powering off the hub. Modular hubs are typically equipped with an optional SNMP management module, redundant power supplies, and fans. Modular-stackable hubs are modular hubs for a small number of modules with an additional port for stacking.

Hubs can have several internal buses, forming several shared segments. Different hub ports are associated (usually under software control rather than in hardware) with different segments. The segments themselves do not communicate with each other in any way. Such a hub is called multi-segment, and its ability to assign ports to segments in software is called configuration switching. When these segments need to be connected, bridges, switches, or routers are used. Multi-segment hubs evolved into switching hubs, which have an internal bridge connecting the segments.

Hubs with an optional fiber-optic port can be used to connect remote groups of nodes to the network. There are three implementations of such a port: a microtransceiver inserted into a slide-in expansion slot, a hinged microtransceiver built into the AUI connector jack, and a fixed optical port. Optical hubs are used as the central device of a distributed network with a large number of individual remote workstations and small workgroups. The ports of such a hub act as amplifiers and perform full packet regeneration. There are hubs with a fixed number of connected segments, but some types have a modular design, which allows flexible adaptation to existing conditions. Most often, hubs and repeaters are self-contained units with their own power supply.


A bridge is a device used for communication between local networks: it transfers frames from one network to another. Bridges have far more functionality than hubs and are smart enough not to repeat network noise, errors, or malformed frames. To each connected network, the bridge appears as an ordinary subscriber (network node).

The bridge receives a frame, stores it in its buffer memory, and parses the frame's destination address. If the frame belongs to the network from which it was received, the bridge does not react to it. If the frame must be sent to another network, the bridge forwards it there, accessing the transmission medium under the same rules as an ordinary node.

Depending on the types of networks they connect, bridges are divided into local and global.

According to the algorithm of operation, bridges are divided into bridges with “source routing” (Source Routing) and “transparent” bridges.

The source routing algorithm was developed by IBM to describe how frames pass through bridges in Token Ring networks. In such a network, bridges need not maintain an address database: they compute the frame's path from information stored in fields of the frame itself. A network node that needs to communicate with another node sends it a special explorer frame, which carries an identifier telling bridges to apply the source routing algorithm. Upon receiving such a frame, the bridge records the direction from which the frame arrived, together with its own name, in a special field of the frame called the Routing Information Field, and then transmits the frame in all available directions except the one it arrived from. As a result, many copies of the same explorer frame appear, and the destination node receives several copies at once – one for each possible route – each containing records of the bridges it passed through. After receiving all the explorer frames, the node chooses one of the possible routes and sends a response to the sender. As a rule, the route of the first explorer frame to arrive is chosen, since it is probably the fastest (its transit time was minimal). The response contains full information about the route along which all subsequent frames should be sent. Once the route is determined, the sending node uses it for quite a long time when sending packets to that recipient.

The term “transparent” bridges covers a large group of devices. Considered in terms of the tasks they solve, this group can be divided into three main subgroups:

– Transparent bridges (transparent bridges) unite networks with common protocols of the physical and link layers of the OSI model.

– Translating bridges (translating bridge) unite networks with different data link and physical layer protocols.

– Encapsulating bridges connect networks with common data link and physical layer protocols through networks with other protocols.

Transparent bridges are the most common. To such a bridge, the local network appears as a set of MAC addresses of the devices operating in it. Bridges examine these addresses to decide whether to forward a frame; for analysis, the frame is written to the bridge's internal buffer. Bridges do not work with network-layer information and know nothing about the topology of connections between segments or networks; they are therefore completely transparent to protocols from the network layer upward. Bridges make it possible to combine several local networks into a single logical network, in which the connected local networks form logical segments.

When a frame passes through a transparent bridge, it is regenerated and relayed from one port to another. Transparent bridges take into account both the source address and the destination address extracted from the received LAN frames. The source address is needed by the bridge to automatically build a database of device addresses. This database, also called the MAC table, maps a station address to a specific bridge port.

All bridge ports operate in what is known as “promiscuous” frame capture mode. This mode is characterized by the fact that all incoming frames are stored in the buffer memory of the bridge. In this mode, the bridge monitors all traffic that is transmitted on segments connected to it. The bridge uses the frames passing through it to learn the topology of the network.

The basic principles of bridge operation are learning, filtering, forwarding, and flooding. After receiving a frame, the bridge checks its integrity using the checksum; incorrect frames are discarded. After a successful check, the bridge compares the sender's address with the addresses in its database, and if the address is not yet there, adds it. In this way the bridge learns the addresses of the devices on the network. Thanks to this ability to learn, new devices can appear on the network without the bridge having to be reconfigured.

In addition to the sender address, the bridge analyzes the recipient address; this analysis is needed to decide the frame's further path. The bridge compares the frame's destination address with the addresses stored in the database. If the destination address belongs to the same segment as the source address, the bridge does not pass the frame into another segment – in other words, it “filters” the frame. This operation protects segments from unnecessary traffic. If the destination address is present in the database and belongs to another segment, the bridge determines which of its ports is associated with that segment and, after gaining access to that segment's transmission medium, transfers the frame to it. This process is called forwarding.

If the destination address is not recorded in the database, or is a broadcast address, the bridge forwards the frame through all its ports except the one on which the frame was received. This process is called broadcasting, or flooding the network. Flooding guarantees that the frame is delivered to every segment of the network and therefore to the recipient.

Because workstations can move from one segment to another, bridges must periodically update the contents of their databases. Accordingly, all records in the database are divided into two types – static and dynamic – and each dynamic entry has an associated inactivity timer. When a frame arrives whose sender address matches a particular entry in the address database, that entry's inactivity timer is reset. If a station does not send frames for a long time, its inactivity timer expires and the address is removed from the database. Determining the optimal inactivity time can be quite a difficult task.
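The learning, filtering, forwarding, and flooding logic described above, together with entry aging, can be sketched as follows. This is an illustrative model: the class and field names are assumptions, and timer handling is reduced to a timestamp per dynamic entry.

```python
# Sketch of transparent-bridge logic: learn the source address, then
# filter, forward, or flood based on the destination address; dynamic
# entries older than max_age are removed (aging).

import time

class Bridge:
    def __init__(self, ports, max_age=300.0):
        self.ports = ports
        self.mac_table = {}          # mac -> (port, last_seen)
        self.max_age = max_age

    def receive(self, frame, in_port):
        src, dst = frame["src"], frame["dst"]
        self.mac_table[src] = (in_port, time.time())   # learning
        entry = self.mac_table.get(dst)
        if entry and time.time() - entry[1] > self.max_age:
            del self.mac_table[dst]                    # aged-out entry
            entry = None
        if dst == "ff:ff:ff:ff:ff:ff" or entry is None:
            # flooding: all ports except the one the frame arrived on
            return [p for p in self.ports if p != in_port]
        out_port, _ = entry
        if out_port == in_port:
            return []                                  # filtering
        return [out_port]                              # forwarding

br = Bridge(ports=[1, 2, 3])
print(br.receive({"src": "aa", "dst": "bb"}, in_port=1))   # unknown dst -> [2, 3]
print(br.receive({"src": "bb", "dst": "aa"}, in_port=2))   # learned -> [1]
print(br.receive({"src": "cc", "dst": "aa"}, in_port=1))   # same segment -> []
```

The third call shows filtering: the destination is known to be on the same port the frame arrived on, so the frame is not forwarded anywhere.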

Bridges can also support additional services. They provide customizable filters, improved data protection, and class-by-class handling of frames.

Custom filters allow the network administrator to filter on any component of a frame, such as the upper-layer protocol, source and destination addresses, frame type, or even its payload. Customizable filters make it possible to partition the network effectively or to block e-mail for specific addresses. Address-based blocking is the backbone of network security: by disallowing frame transmission, a system administrator can restrict access to certain network resources for particular sender and recipient addresses. Custom filters can also prevent packets of certain protocols from passing through certain interfaces. By applying both methods at once, the network administrator can isolate individual devices or network segments from frames of a given type.

Class-based processing allows administrators to prioritize frames passing through the network. The administrator can manage throughput by directing frames into different processing queues. Class-based service is especially effective on low-speed links and for applications with differing delay requirements.

Bridges, both transparent and source-routed, operate at the MAC sublayer of the data link layer of the OSI model. It should be noted that source routing is, in a general sense, a method (algorithm) for locating a subscriber in the network.


The switch is functionally a high-speed multi-port bridge capable of simultaneously connecting multiple nodes at the maximum speed provided by the transmission medium. Often switches are used for segmentation – reducing the size of collision domains. In fact, collisions are converted into frame queues inside the switch. The limiting case of segmentation – microsegmentation – is achieved when a single node is connected to each port of the switch, then the collision domain consists only of the node and switch port (duplex mode allows you to completely eliminate collisions during microsegmentation).

Switches operate in one of three modes:

1. Buffered switching (store-and-forward): each frame is first received in full into the switch's buffer memory, then its checksum is checked, the destination port is determined, the switch waits for that port to become free, and the frame is transmitted. This method guarantees that erroneous and collision-truncated frames are filtered out. The main disadvantage is a large transmission delay, reaching several milliseconds per frame.

2. Switching “on the fly” (cut-through): the frame is transmitted to the destination port immediately after the destination address is received (in Ethernet, the first 6 bytes of the frame header). If the destination port is busy at that moment, the switch processes the frame in buffered mode. Cut-through switching introduces the lowest possible delay – 11.2 µs for Ethernet – but all frames are forwarded, including erroneous ones.

3. Fragment-free switching: the switch buffers the first 64 bytes of the frame; if the frame is no longer than 64 bytes, the switch processes it in buffered mode, and if it is longer, the frame is passed to the destination port as in cut-through mode.

Most low-end and mid-range switches implement only buffered switching. Cut-through switching is typical of backbone high-speed switches, where minimal transmission delay matters far more than suppressing the propagation of erroneous frames. Top-level switches sometimes use adaptive switching: at first all ports operate in cut-through mode; ports receiving many erroneous frames are then switched to fragment-free mode, and if that does not filter out the erroneous frames (in the case of long packets with errors), such ports are put into buffered switching mode.
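The three modes can be compared by how much of the frame the switch must receive before it starts forwarding. The sketch below assumes 10 Mbit/s Ethernet (0.1 µs per bit); the 8-byte preamble plus the 6-byte destination address give the 11.2 µs cut-through figure cited above.

```python
# Illustrative comparison of forwarding delay in the three switching
# modes: the delay is the time to receive the portion of the frame the
# mode requires before transmission can begin.

BIT_TIME_US = 0.1   # one bit time at 10 Mbit/s

def forwarding_delay_us(frame_bytes, mode):
    preamble = 8
    if mode == "cut-through":
        received = preamble + 6              # wait only for the destination MAC
    elif mode == "fragment-free":
        received = preamble + 64             # wait out the collision window
    elif mode == "store-and-forward":
        received = preamble + frame_bytes    # wait for the whole frame
    else:
        raise ValueError(mode)
    return received * 8 * BIT_TIME_US

# Delays for a maximum-size (1514-byte) Ethernet frame:
for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, forwarding_delay_us(1514, mode), "us")
```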

To achieve high performance (required to serve all ports simultaneously), each port of the switch is usually provided with a separate processor, usually an application-specific integrated circuit (ASIC) optimized for switching functions. The central node connecting the processors of individual ports is built on the basis of one of three schemes (combined options are also used):

– switching matrix,

– shared multi-input memory,

– common bus.

The switching matrix is a combinational circuit, built, for example, from controlled switches, that connects its input to one of its outputs according to a given destination port number. The switch fabric thus physically switches communication links between ports, providing the fastest way for port processors to communicate. The main drawback of the switching matrix is the limited number of ports in such switches (the complexity of the matrix circuit grows in proportion to the square of the number of ports). In addition, with a switching matrix each port must be able to buffer incoming frames independently, otherwise frames may be lost while waiting for the output port to become free.

Shared multi-input memory makes it possible to increase the number of switch ports without complicating the circuitry. Port processors use the shared memory to communicate with each other. Switching of the memory's inputs and outputs is carried out by a special control unit, which organizes the data into a queue for each output port. At an input port's request, the control unit connects it to the input of the required queue, and the port processor writes the frame data into it. When complete frames appear in the queues, the control unit connects the output ports to their queues in turn so that frames can be read out for transmission.

Port processors in switches with a common bus are equipped, on one side, with bus-access arbitration modules and, on the other, with filters that select from the bus the cells intended for the given port. In such switches a frame is transmitted over the bus not as a whole but in small parts – cells – which (together with the bus's high data transfer rate) makes a pseudo-parallel mode of frame transmission between ports possible. For non-blocking operation, the bandwidth of the common bus must be at least half the sum of the bandwidths of all ports.
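The non-blocking condition stated above reduces to a one-line check: each frame crosses the bus once while occupying both an input and an output port, so the bus needs at least half the total port bandwidth. A minimal sketch (function name and figures are illustrative):

```python
# Non-blocking condition for a shared-bus switch:
# bus bandwidth >= (sum of all port bandwidths) / 2

def is_non_blocking(bus_mbps, port_speeds_mbps):
    return bus_mbps >= sum(port_speeds_mbps) / 2

# A 1000 Mbit/s bus serving full-speed 100 Mbit/s ports:
print(is_non_blocking(1000, [100] * 16))   # -> True  (needs >= 800)
print(is_non_blocking(1000, [100] * 24))   # -> False (needs >= 1200)
```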

Complex switches tend to combine these architectures. For example, modular switches tend to use a common bus to connect the modules, while inside each module (usually no more than 12 ports) the fastest architecture – a switching matrix – is implemented.

Depending on the design option, there are:

– autonomous (standalone) switches,

– stack switches,

– modular switches based on the chassis.

The first two variants have a fixed number (usually 8, 16, or 24, rarely up to 30) and type of ports, which cannot be changed. Standalone switches are used at the workgroup level. Stack switches differ from standalone ones by an additional (stack) interface that allows several such switches to be combined into a system working as a single switch – a stack of switches. As a rule, the number of switches in a stack does not exceed four (the throughput of a stack interface is in the range of 200-400 Mbps).

Stack switches are used in networks where the capacity of a stand-alone switch is no longer sufficient (the number of nodes is more than 30), and the installation of a much more expensive modular switch is not justified. Modular chassis-based switches allow you to connect the required number of different types of modules, often with the ability to replace them without turning off the switch (hot swap). The number of ports in such switches can exceed 100. As a rule, modular switches are used as trunk switches.

In Ethernet / Fast Ethernet networks, an intermediate type of network device is often used – the switching hub, a two-segment hub (one segment Ethernet, the other Fast Ethernet) whose segments are connected by a two-port bridge. As a result, all connected Ethernet stations form one collision domain and all Fast Ethernet stations form a second; connections between stations of different segments are served by the bridge. Such devices are usually cheaper than full-fledged switches and are used most effectively when most stations are Ethernet and one or two servers need a high-speed (Fast Ethernet) connection. Since all high-speed nodes form one collision domain, network performance will decrease as their number grows.


A firewall can be defined as a set of hardware and software designed to prevent outside access to a network and to control the data entering and leaving it. Firewalls have gained general acceptance since the early 1990s, largely due to the rapid development of the Internet, and since then a large number of products called firewalls have been developed and put into practice. Firewalls can protect a corporate network from unauthorized access from the Internet or from another corporate network.

The firewall is installed at the edge of the protected network and filters all incoming and outgoing data, passing only permitted packets and blocking attempts to penetrate the network. A properly configured firewall allows (or refuses) a specific packet, and allows (or refuses) the establishment of a specific communication session, in accordance with the established rules. For a firewall to work effectively, three conditions must be met:

– all traffic must pass through one point;

– the firewall must monitor and log all passing traffic;

– the firewall itself must be impregnable to external attacks.

Considered in relation to the OSI model, firewalls can be conditionally divided into the following categories:

– firewalls with packet filtering (packet-filtering firewall);

– circuit-level gateways;

– application-level gateways (application-level gateway);

– stateful inspection firewalls.

Packet-filtering firewalls are the most widely used; they are implemented on routers and configured to filter incoming and outgoing packets.

Packet filters examine the fields of incoming IP packets and then pass or drop them depending on, for example, the source and destination IP addresses, the TCP or UDP source and destination port numbers, and other parameters. The filter compares this information with a list of filtering rules to decide whether to allow or deny the packet. The list of filtering rules contains permitted IP addresses, protocol types, sender port numbers, and recipient port numbers. The packet filter checks only the packet header, not the data inside it.
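The filtering decision can be modeled as matching header fields against an ordered rule list. This is a minimal sketch: the rule fields, the first-match semantics, and the implicit deny-by-default rule are illustrative assumptions, though they mirror common router ACL behavior.

```python
# Minimal model of a packet filter: each rule matches on addresses,
# protocol, and destination port; the first matching rule decides the
# packet's fate, and the default is to deny.

def match(rule, packet):
    return all(rule.get(k) in (None, packet[k])   # None acts as a wildcard
               for k in ("src_ip", "dst_ip", "proto", "dst_port"))

def filter_packet(rules, packet):
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return "deny"                                  # implicit default rule

rules = [
    {"src_ip": None, "dst_ip": None, "proto": "tcp", "dst_port": 80, "action": "allow"},
    {"src_ip": "10.0.0.5", "dst_ip": None, "proto": None, "dst_port": None, "action": "deny"},
]
print(filter_packet(rules, {"src_ip": "10.0.0.7", "dst_ip": "10.0.0.1",
                            "proto": "tcp", "dst_port": 80}))   # -> allow
print(filter_packet(rules, {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.1",
                            "proto": "udp", "dst_port": 53}))   # -> deny
```

Note that the filter sees only header fields, exactly as described above: nothing in the model inspects the packet payload.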

Packet filtering is the “cheapest” way to implement a firewall. Such a firewall can check packets of various protocols at high speed, since it simply looks at the packet header to decide the packet's fate. The filter analyzes packets at the network layer and is independent of the application being used; it is this independence that gives it good performance.

The disadvantages of such a firewall include its inability to recognize packets with spoofed IP addresses and its inability to track a specific network connection.

Spoofing means that by using the IP address of a legitimate user, an attacker can freely penetrate the protected network and gain access to its resources: the packet filter passes such a packet into the network regardless of where the session actually originated and who is hiding behind the address. There is an advanced version of packet filtering called dynamic packet filtering, which analyzes the address from which an access attempt is made (it may later be recognized as unauthorized) and pings that address to check it. As is easy to see, if an internal IP address is being used from outside, the ping will not reach the actual sender of the packet; in this case the access attempt is rejected and no session is established. Packet filters now occupy a fairly prominent place in network security systems. They are not well suited to protecting a network from the outside, but because of their high performance and low cost they are well suited to intranet security: an organization can use them to divide the network into segments and install a firewall in each segment, thus separating, for example, accounting from the sales department.


Token Ring Technology

Token Ring technology was developed by IBM in the late 1970s. The IEEE 802.5 specifications practically repeat the proprietary ones, differing only in some details (for example, IEEE 802.5 does not specify the transmission medium or network topology, while the proprietary standard defines twisted pair as the medium and a star as the physical topology). Token Ring networks can operate at one of two bit rates: 4 Mbps (IEEE 802.5) or 16 Mbps (IEEE 802.5r). Only stations operating at the same speed can be present in one ring.

Token Ring defines a logical “ring” topology: each station is connected to two neighbors. Physically, the stations are connected into a star-shaped network, in the center of which is a multi-station access unit (MSAU, Multi-Station Access Unit), which is essentially a repeater. As a rule, MSAU is able to exclude an idle station from the ring (a bypass relay is used for this). MSAUs also have separate connectors to combine multiple MSAUs into one large ring. The maximum number of stations in a ring is 250 (IEEE 802.5), 260 (IBM Token Ring, STP cable), and 72 (IBM Token Ring, UTP cable).

The maximum length of a Token Ring is 4000 m.

In the late 1990s, IBM developed a new version of Token Ring technology – High Speed Token Ring (HSTR) – supporting speeds of 100 and 155 Mbps. A 1 Gbit/s version of Token Ring was also under development.

Token Access Method

Token Ring is the most widespread token-passing LAN technology. In such networks a special block of data – the token – circulates, passed from station to station in a fixed order. The station that receives the token gains the right to transmit its data: it changes one bit in the token (“token busy”), appends its data, and transmits the result onto the network (to the next station). Stations relay such a frame around the ring until it reaches the recipient, which copies the data from it and passes it on. When the sender receives its data frame back after a full circle, it removes the frame and either sends a new data frame (if the maximum token holding time has not expired) or resets the token's busy bit to “free” and passes the token further along the ring.

During the entire time it holds the token, before and after transmitting its frame, the station must issue a fill sequence – an arbitrary sequence of 0s and 1s. This is done to maintain synchronization and to detect ring breaks.

The adapter's main operating mode is repetition: the transmitter outputs, bit by bit, the data received by the receiver. When the station has a frame to transmit and receives a free token, it switches to transmit mode; the bit stream arriving at the receiver is then analyzed for service frames, and either (if a service frame is detected) an interrupt is initiated (transmission of its own frame stops and an interrupt frame is issued), or the received data is discarded.

In 4 Mbps Token Ring networks, a station releases the token only after its data frame has returned. 16 Mbps Token Ring networks use the Early Token Release algorithm: the token is released onto the ring immediately after the data frame transmission ends. In this case several data frames travel around the ring simultaneously, but at any moment only one station – the one holding the token – can generate them.

Correct operation of the network is monitored by the active monitor (Active Monitor, AM), which is elected during ring initialization as the station with the highest MAC address. If the active monitor fails, a new one is elected (all stations in the network other than the active monitor are considered standby monitors). The active monitor's main function is to ensure that exactly one token is present in the ring: it releases a token into the ring and removes frames that have made more than one revolution. To announce itself to other stations, the active monitor periodically transmits an AMP service frame. If the token does not return to the active monitor within a certain time (sufficient for the token to travel around the ring), the token is considered lost and the active monitor generates a new one.

Frame transmission is governed by maximum time intervals defined in the standard, which are monitored by special timers in the network adapters; the standard gives default values, which the network administrator can change.
