What is Bandwidth

Bandwidth is usually defined as the difference between the upper and lower boundary frequencies of a section of the frequency response. Bandwidth is expressed in units of frequency (for example, Hz). Increasing the bandwidth allows more information to be transmitted.

Frequency response unevenness

The unevenness of the frequency response characterizes the degree to which the response deviates from a straight line parallel to the frequency axis. It is expressed in decibels.

Reducing the frequency response unevenness in the band improves the reproduction of the transmitted signal shape.

    Ideal and real models of information transmission channel.

IDEAL CHANNEL

The ideal channel model is used when interference can be ignored; the output signal is deterministic, i.e. precisely determined at any given time.

REAL CHANNEL

In real channels interference is always present, and the channel output signal has the form

x(t) = μ(t)∙s(t-T)+w(t),

where s(t) is the input signal, T is the signal delay, w(t) is additive interference, and μ(t) is multiplicative noise.

    The concept of sampling and quantization of signals.

The transformation of a continuous set of analog signals into a discrete set is called sampling.

An analog signal is a signal in which each of the representing parameters is described by a function of time with a continuous set of possible values.

A discrete signal is a signal that takes only a finite number of values.

Quantization is the division of the range of values of a continuous or discrete quantity into a finite number of intervals.

Quantization should not be confused with sampling (and, accordingly, the quantization step with the sampling frequency). In sampling, a time-varying quantity (signal) is measured at a given rate (the sampling frequency), so sampling splits the signal along the time axis (horizontally on the graph). Quantization, in contrast, brings the signal to specified values, that is, it divides it by signal level (vertically on the graph). A signal to which both sampling and quantization have been applied is called digital.

Fig. 1 – quantized signal.

Fig. 2 – non-quantized signal with discrete time.

A digital signal is a data signal in which each of the representing parameters is described by a discrete-time function and a finite set of possible values.

Fig. 3 – digital signal.
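The sampling–quantization–digital chain above can be sketched in a few lines of Python (the sine signal, the 8 Hz sampling rate, and the 0.25 quantization step are arbitrary illustration choices):

```python
import math

def sample(f, t0, t1, fs):
    """Sample a continuous-time function f on [t0, t1) at rate fs (Hz)."""
    n = int((t1 - t0) * fs)
    return [f(t0 + k / fs) for k in range(n)]

def quantize(values, step):
    """Round each value to the nearest multiple of the quantization step."""
    return [round(v / step) * step for v in values]

# A 1 Hz sine sampled at 8 Hz, then quantized with a step of 0.25:
analog = lambda t: math.sin(2 * math.pi * t)
discrete = sample(analog, 0.0, 1.0, 8)   # discrete time, continuous levels
digital = quantize(discrete, 0.25)       # discrete time and discrete levels
```

Each quantized sample differs from the exact one by at most half the quantization step.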

    Classification of signal sampling methods.

Sampling is performed in time and by level.

TIME SAMPLING

Time sampling

Uniform sampling

Kotelnikov's theorem

Adaptive sampling

Because the function changes at different rates at different times, the sampling step can vary, ensuring a uniform error at each step.

SAMPLING BY LEVEL

Discretization of function values by level is called quantization. The quantization operation comes down to transmitting, instead of a given instantaneous message value, the nearest value on an established scale of discrete levels.

Discrete values on the level scale are most often chosen uniformly. Quantization introduces an error (distortion), because the true values of the function are replaced by rounded ones. The magnitude of this error does not exceed half the quantization step and can be reduced to an acceptable value. The error is a random function and appears at the output as additional noise ("quantization noise") superimposed on the transmitted message.
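The half-step bound is easy to check numerically (the step q = 0.1 and the test grid are arbitrary; for an error uniformly distributed over ±q/2 the RMS quantization noise is about q/√12 ≈ 0.29·q):

```python
q = 0.1                                       # quantization step (arbitrary)
values = [i * 0.001 for i in range(1000)]     # dense grid on [0, 1)
errors = [v - round(v / q) * q for v in values]

max_err = max(abs(e) for e in errors)                    # never exceeds q/2
rms = (sum(e * e for e in errors) / len(errors)) ** 0.5  # "quantization noise"
```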

TIME AND LEVEL DISCRETIZATION

Combined time and level discretization converts a continuous message into a discrete one (an analog signal into digital form), which can then be encoded and transmitted using discrete (digital) technology.

DISCRETE FOURIER TRANSFORM

The sampled signal can be thought of as the result of multiplying the original continuous signal by a series of unit pulses.

    Criteria for assessing the accuracy of signal sampling.

The difference between the true signal values x(t) and the approximating function P(t), or the reproducing function V(t), represents the current sampling or reconstruction error, respectively:

ε(t) = x(t) – P(t) or ε(t) = x(t) – V(t). (1)

The choice of criterion for assessing the sampling (and reconstruction) error of the signal is carried out by the recipient of the information and depends on the intended use of the sampled signal and the capabilities of the hardware (software) implementation. Error assessment can be carried out for both individual and multiple signal implementations.

More often than others, the deviation of the reproducing function V(t) from the signal x(t) on the sampling interval Δt_i = t_i – t_i–1 is assessed by the following criteria.

a) The criterion of largest deviation:

ε_max = max |ε(t)|, t ∈ Δt_i,

where ε(t) is the current error determined by expression (1).

b) The mean-square criterion, determined by the expression

ε̄² = M[ε²(t)],

where ε(t) is the current error (1) and the bar denotes averaging over the probability ensemble.

c) The integral criterion, as a measure of the deviation of x(t) from V(t), has the form of the integral of the error over the sampling interval.

d) The probabilistic criterion is determined by the relation

P{|ε(t)| ≤ ε₀} ≥ P₀,

where ε₀ is the permissible error value and P₀ is the acceptable probability that the error does not exceed ε₀.
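On discrete samples, criteria (a)–(c) reduce to a maximum, a mean square, and a mean absolute value of the error sequence. A sketch (the sine signal and the rounding used as a crude "reproducing function" are invented for illustration):

```python
import math

def error_criteria(x, v):
    """Evaluate reconstruction-error criteria for sampled x(t) and V(t)."""
    eps = [a - b for a, b in zip(x, v)]               # current error eps(t)
    max_dev = max(abs(e) for e in eps)                # (a) largest deviation
    mean_sq = sum(e * e for e in eps) / len(eps)      # (b) mean-square criterion
    integral = sum(abs(e) for e in eps) / len(eps)    # (c) integral criterion (mean |eps|)
    return max_dev, mean_sq, integral

x = [math.sin(0.1 * k) for k in range(100)]   # "true" signal samples
v = [round(s, 1) for s in x]                  # a crude reproducing function
max_dev, mean_sq, integral = error_criteria(x, v)
```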

    Uniform sampling.

Time sampling is performed by taking samples of the function at certain discrete times. As a result, a continuous function is replaced by a collection of instantaneous values.

Uniform sampling

The reference moments are chosen uniformly on the time axis. Kotelnikov's theorem: if an analog signal has a spectrum of limited width, it can be restored uniquely and without loss from its discrete samples, provided they are taken at a frequency strictly greater than twice the upper frequency.
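The "strictly greater than twice the upper frequency" condition matters: sampled at 10 Hz, a 9 Hz cosine produces exactly the same samples as a 1 Hz cosine, so the original cannot be restored (the frequencies here are arbitrary example values):

```python
import math

fs = 10.0        # sampling frequency, Hz
n = range(20)    # sample indices

# fs > 2*9 is violated, so the 9 Hz cosine aliases to a 1 Hz cosine:
high = [math.cos(2 * math.pi * 9 * k / fs) for k in n]
low = [math.cos(2 * math.pi * 1 * k / fs) for k in n]

max_diff = max(abs(a - b) for a, b in zip(high, low))   # indistinguishable samples
```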

    The concept of information coding.

Code is a set of conventions (or signals) for recording (or conveying) some predefined concepts.

Encoding information is the process of forming a certain representation of information. In a narrower sense, the term "coding" is often understood as a transition from one form of information representation to another that is more convenient for storage, transmission, or processing.

Usually, each concept being encoded (encoding is sometimes called encryption) is represented by a separate sign.

A sign is an element of a finite set of elements distinct from each other.

A sign together with its meaning is called a symbol.

A set of signs in which their order is determined is called an alphabet. There are many alphabets:

alphabet of Cyrillic letters (A, B, V, G, D, E, ...)

alphabet of Latin letters (A, B, C, D, E, F,...)

alphabet of decimal digits(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

alphabet of zodiac signs (pictures of zodiac signs), etc.

Sets consisting of only two signs are of particular importance: a pair of signs (+, -), a pair of digits (0, 1), a pair of answers (yes, no).
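As an illustration of coding over the two-sign alphabet (0, 1), each sign of a larger alphabet can be given a fixed-length binary code word; the helper below is a sketch written for this text, not a standard routine:

```python
def make_fixed_length_code(alphabet):
    """Assign every sign of the alphabet a fixed-length binary code word."""
    width = max(1, (len(alphabet) - 1).bit_length())
    return {sign: format(i, "0{}b".format(width)) for i, sign in enumerate(alphabet)}

digits = "0123456789"                       # the decimal-digit alphabet above
code = make_fixed_length_code(digits)       # 10 signs need 4 binary positions
encoded = "".join(code[d] for d in "42")    # "42" -> "01000010"
```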

    Block diagram of the information transmission channel.

Fig. 1.3. Functional diagram of a discrete message transmission system

    The concept of a real and ideal information transmission channel.

IDEAL CHANNEL

The ideal channel model is used when the presence of interference can be ignored. When this model is used, the output signal is deterministic, and the power and bandwidth of the signals are limited.

A deterministic signal is precisely determined at any given time.

Bandwidth is the difference between the maximum and minimum frequencies of the signal.

REAL CHANNEL

In real channels there are always errors when transmitting messages. Errors lead to a decrease in channel capacity and loss of information. The probability of errors occurring is largely determined by signal distortion and the influence of interference.

The channel output signal can be written in the following form:

x(t) = μ(t)∙s(t-T)+w(t),

where s(t) is the signal at the channel input, w(t) is additive noise, μ(t) is multiplicative noise, T is signal delay.

Additive interference – interference added to the signal when it is transmitted over an information channel.

Additive noise is caused by fluctuation phenomena (random fluctuations in current and voltage) associated with thermal processes in wires, resistors, transistors and other circuit elements, interference due to atmospheric phenomena (lightning discharges, etc.) and industrial processes (operation of industrial installations, other communication lines, etc.).

Multiplicative noise – interference multiplied with the signal.

Multiplicative interference is caused by random changes in the channel transmission coefficient due to changes in the characteristics of the medium in which the signals propagate, and by the gain of the circuits when the supply voltage changes, due to signal fading as a result of interference and various attenuation of signals during multipath propagation of radio waves. Multiplicative interference also includes “quantum noise” of lasers used in optical systems for transmitting and processing information. The “quantum noise” of a laser is caused by the discrete nature of the light radiation and depends on the intensity of the radiation, i.e., on the useful signal itself.

    Gaussian channel and its varieties.

GAUSSIAN CHANNEL

The main assumptions when constructing such a model are as follows:

– the transmission coefficient and delay time of signals in the channel do not depend on time and are deterministic quantities known at the place where the signals are received;

– there is additive fluctuation interference in the channel – Gaussian "white noise" (Gaussian process, characterized by uniform spectral density, normally distributed amplitude value and additive method of influencing the signal).

The Gaussian channel is used as a model of real wire communication channels and single-beam channels without fading or with slow fading. In this case, fading is an uncontrolled random change in signal amplitude. This model allows one to analyze the amplitude and phase distortions of signals and the influence of fluctuation interference.

GAUSSIAN CHANNEL WITH UNDETERMINED SIGNAL PHASE

In this model, the delay time of the signal in the channel is considered as a random variable, so the phase of the output signal is also random. To analyze the output signals of a channel, it is necessary to know the distribution law of the delay time or signal phase.

GAUSSIAN SINGLE BEAM CHANNEL WITH FADING

GAUSSIAN MULTIPATH CHANNEL WITH FADING

This model describes radio channels in which signals propagate from the transmitter to the receiver along several different paths. The durations of the signals and the transmission coefficients of the various paths are unequal and random. The received signal is formed as a result of the interference of signals arriving along different paths. In general, the frequency and phase characteristics of such a channel depend on time and frequency.

GAUSSIAN MULTIPATH CHANNEL WITH FADING AND ADDITIVE LOCAL INTERFERENCE

In this model, along with fluctuation interference, various types of concentrated interference are also taken into account. It is the most general and quite fully reflects the properties of many real channels. However, using it makes analysis complex and labor-intensive and requires collecting and processing a large volume of initial statistical data.

Currently, to solve problems of analysis of continuous and discrete channels, as a rule, a Gaussian channel model and a Gaussian single-beam channel model with fading are used.

    Methodology for generating the Shannon-Fenno code, its advantages and disadvantages.

SHANNON-FENNO ALGORITHM

The letters of the alphabet, arranged in descending order of probability, are divided into two groups of (as nearly as possible) equal total probability. The symbols of the first group receive 0 in the first (leftmost) position of their code words, and those of the second group receive 1. Each group is then again divided into two subgroups by the same rule of approximately equal probabilities, and the second position of the code word is filled in each subgroup (0 or 1). The process is repeated until every element of the alphabet has been encoded.
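A minimal sketch of this procedure (the function and the example probabilities are invented for illustration):

```python
def shannon_fano(probabilities):
    """Build a Shannon-Fano code for a dict {symbol: probability}."""
    ordered = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        # Find the split point where the two halves are closest in total probability.
        total = sum(p for _, p in group)
        best_i, best_diff = 1, float("inf")
        running = 0.0
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_i, best_diff = i, diff
        split(group[:best_i], prefix + "0")   # first group: append 0
        split(group[best_i:], prefix + "1")   # second group: append 1

    split(ordered, "")
    return codes

codes = shannon_fano({"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1})
```

For these probabilities the result is A → 0, B → 10, C → 110, D → 111: more probable symbols get shorter code words, and no code word is a prefix of another.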

ADVANTAGES

– ease of implementation and, as a consequence, high speed of encoding and decoding;

– it is convenient to encode information in the form of a sequence of zeros and ones, if you imagine these values ​​as two possible stable states of an electronic element: 0 – absence of an electrical signal; 1 – presence of an electrical signal. In addition, in technology it is easier to deal with a large number of simple elements than with a small number of complex ones.

– Under the Shannon–Fano method, the more probable a message is, the sooner it forms its own group and the shorter the code word that represents it. This ensures the high efficiency of the Shannon–Fano code.

FLAWS

– To decode a received message, the code table must be sent along with it, which increases the data volume of the final message.

– With an ordinary code (one in which all symbols are used to transmit information), an error in the code makes the message impossible to decipher. Because code combinations have different lengths, a single error (replacing a 1 with a 0, or vice versa) can cause one or more code combinations in the message to stop matching entries in the code table.

–Shannon–Fano coding is a fairly old compression method, and today it is not of particular practical interest.

    Entropy of a source of independent messages.

The total entropy of independent discrete message sources X and Y is equal to the sum of the entropies of the sources:

H_ind(X,Y) = H(X) + H(Y),

where H_ind(X,Y) is the total entropy of the independent systems, H(X) is the entropy of system X, and H(Y) is the entropy of system Y.

    Entropy of the source of dependent messages.

The amount of information about source X is defined as the decrease in the entropy of source X as a result of obtaining information about source Y.

H_dep(X,Y) = H(X) + H(Y|X),

where H_dep(X,Y) is the total entropy of the dependent systems, H(X) is the entropy of system X, and H(Y|X) is the conditional entropy of system Y given X.

The entropy of dependent systems is less than the entropy of independent systems. If the two entropies are equal, we have the special case of dependent systems in which the systems are in fact independent.

H_dep(X,Y) ≤ H_ind(X,Y).

    Properties of entropy. Hartley measure.

1. Entropy is a quantity that is always positive and finite, because probabilities lie in the range from 0 to 1; the information of a single message a is H(a) = -log_k P(a).

2. Additivity: the amount of information contained in several independent messages equals the sum of the amounts of information contained in each of them.

3. Entropy equals 0 if the probability of one of the states of the information source equals 1, so that the state of the source is completely determined (the probabilities of the remaining states are zero, since the probabilities must sum to 1).

Hartley's measure: for a source with N equally probable states, I = log₂ N, where I is the amount of information in bits.
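These properties are easy to verify numerically (the distributions below are arbitrary examples):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fully determined source has zero entropy:
h_certain = entropy([1.0, 0.0, 0.0])

# Additivity: for independent X and Y, H(X, Y) = H(X) + H(Y).
px, py = [0.5, 0.5], [0.25, 0.75]
joint = [a * b for a in px for b in py]
additive = abs(entropy(joint) - (entropy(px) + entropy(py))) < 1e-12

# Hartley's measure: N equiprobable states give I = log2(N) bits.
hartley_8 = entropy([1 / 8] * 8)   # 8 states -> 3 bits
```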

    The concept of source performance and information transfer speed.

INFORMATION SOURCE PERFORMANCE

When a message source operates, individual signals appear at time intervals that in general need not be constant. If, however, there is a certain average duration for the source to create one signal, then the entropy of the source per unit time is called the productivity of the information source.

INFORMATION TRANSMISSION RATE

This is the data transfer rate expressed in the number of bits, characters or blocks transferred per unit of time.

The theoretical upper bound on the speed of information transmission is determined by the Shannon-Hartley theorem.

SHANNON-HARTLEY THEOREM

The channel capacity C, meaning the theoretical upper bound on the data rate that can be transmitted with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:

C = B∙log₂(1 + S/N),

where C – channel capacity, bit/s; B – channel bandwidth, Hz; S – total signal power, W; N – noise power, W.
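A direct evaluation of the formula (the telephone-channel numbers below are a common textbook illustration, not taken from this text):

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr)

# Example: B = 3100 Hz and S/N = 1000 (30 dB) give roughly 31 kbit/s.
c = channel_capacity(3100, 1000)
```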

The main characteristics of communication lines include:

· bandwidth;

· attenuation;

· noise immunity;

· throughput;

· unit cost.

Bandwidth is a continuous range of frequencies for which the ratio of the amplitude of the output signal to the input signal exceeds some predetermined limit, usually 0.5. Bandwidth has the greatest influence on the maximum possible speed of information transmission over a communication line.

The bandwidth depends on the type of line and its length. The slide shows the bandwidths of various types of communication lines, as well as the frequency ranges most commonly used in communications technology.

Characteristics of communication channels. Attenuation

The communication line distorts the transmitted data because its physical parameters differ from the ideal. The communication line is a distributed combination of active resistance and inductive and capacitive loads.

Types of characteristics and methods for their determination.

The main characteristics of communication lines include:

· amplitude-frequency response;

· bandwidth;

· attenuation;

· noise immunity;

· crosstalk at the near end of the line;

· throughput;

· reliability of data transmission;

· unit cost.

First of all, a computer network developer is interested in the throughput and reliability of data transmission, since these characteristics directly affect the performance and reliability of the created network. Throughput and reliability are characteristics of both the communication line and the method of data transmission. Therefore, if the transmission method (protocol) has already been defined, then these characteristics are also known. For example, the bandwidth of a digital line is always known, since a physical layer protocol is defined on it, which specifies the bit rate of data transfer - 64 Kbps, 2 Mbps, etc.

However, you cannot talk about the throughput of a communication line until a physical layer protocol has been defined for it.

Frequency response, bandwidth and attenuation

The amplitude-frequency characteristic shows how the amplitude of a sinusoid at the output of a communication line attenuates compared to the amplitude at its input for all possible frequencies of the transmitted signal. Instead of amplitude, this characteristic often uses a signal parameter such as its power.

In practice, instead of the frequency response, other, simplified characteristics are used - bandwidth and attenuation.

Attenuation defined as the relative decrease in amplitude or power of a signal when a signal of a certain frequency is transmitted along a line. Thus, attenuation represents one point from the amplitude-frequency characteristic of the line. Often, when operating a line, the fundamental frequency of the transmitted signal is known in advance, that is, the frequency whose harmonic has the greatest amplitude and power. Therefore, it is enough to know the attenuation at this frequency to approximately estimate the distortion of the signals transmitted along the line.

Attenuation A is usually measured in decibels and is calculated using the following formula:

A = 10 log (Pout/Pin),

where Pout is the signal power at the line output and Pin is the signal power at the line input.

Since the output signal power of a cable without intermediate amplifiers is always less than the input signal power, the cable attenuation is always a negative value.

For example, a Category 5 twisted pair cable is characterized by an attenuation of at least -23.6 dB for a frequency of 100 MHz with a cable length of 100 m. The frequency of 100 MHz was chosen because the cable of this category is intended for high-speed data transmission, the signals of which have significant harmonics with the frequency approximately 100 MHz.

Category 3 cable is intended for low-speed data transmission, so attenuation at a frequency of 10 MHz (not lower than -11.5 dB) is specified for it. Often they operate with absolute values ​​of attenuation, without indicating the sign.

Absolute power level, such as transmitter power level, is also measured in decibels. In this case, a value of 1 mW is taken as the base value of the signal power, relative to which the current power is measured. Thus, the power level p is calculated using the following formula:

p = 10 log (P/1mW) [dBm],

where P is the signal power in milliwatts, and dBm is the unit of power level (decibels referenced to 1 mW).
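Both formulas in one sketch (the example powers are arbitrary):

```python
import math

def attenuation_db(p_out, p_in):
    """A = 10 * log(Pout / Pin); negative when the line attenuates the signal."""
    return 10 * math.log10(p_out / p_in)

def power_level_dbm(p_mw):
    """p = 10 * log(P / 1 mW): absolute power level in dBm."""
    return 10 * math.log10(p_mw / 1.0)

half = attenuation_db(0.5, 1.0)   # halving the power is about -3 dB
level = power_level_dbm(1.0)      # 1 mW corresponds to 0 dBm
```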

Thus, amplitude-frequency response, bandwidth and attenuation are universal characteristics, and their knowledge allows us to draw a conclusion about how signals of any shape will be transmitted through a communication line.

Characteristics of communication channels. Noises


Noises

The higher the frequency of the periodic carrier signal, the more information per unit time is transmitted along the line and the higher the line capacity with a fixed physical encoding method. However, on the other hand, as the frequency of the periodic carrier signal increases, the spectrum width of this signal also increases. The line transmits this spectrum of sinusoids with those distortions that are determined by its passband. This does not mean that signals cannot be transmitted. The greater the discrepancy between the line bandwidth and the width of the spectrum of transmitted information signals, the more the signals are distorted and the more likely errors are in the recognition of information by the receiving side, which means that the speed of information transmission actually turns out to be lower than one might expect.

The relationship between the bandwidth of a line and its maximum possible throughput, regardless of the adopted physical encoding method, was established by Claude Shannon:

Shannon's formula:

C = F log₂ (1 + Ps/Pn),

where C is the maximum line capacity in bits per second,

F is the line bandwidth in hertz,

Ps is the signal power,

Pn is the noise power.

From this relationship it is clear that although there is no theoretical limit to the capacity of a fixed-bandwidth link, in practice there is such a limit. Indeed, it is possible to increase the throughput of a line by increasing the transmitter power or reducing the noise power (interference) on the communication line. Both of these components are very difficult to change. Increasing the transmitter power leads to a significant increase in its size and cost. Reducing the noise level requires the use of special cables with good protective screens, which is very expensive, as well as reducing noise in the transmitter and intermediate equipment, which is not easy to achieve.

In addition, the influence of the useful signal powers and noise on the throughput is limited by a logarithmic dependence, which does not grow as quickly as a directly proportional one. Thus, with a fairly typical initial ratio of signal power to noise power of 100 times, increasing the transmitter power by 2 times will only provide a 15% increase in line capacity.

The noise immunity of a line determines its ability to reduce the level of noise generated in the external environment on internal conductors. The noise immunity of a line depends on the type of physical medium used, as well as on the shielding and noise-suppressing means of the line itself. Radio lines are the least resistant to interference; cable lines have good resistance and fiber-optic lines, which are insensitive to external electromagnetic radiation, have excellent resistance. Typically, to reduce interference caused by external electromagnetic fields, conductors are shielded and/or twisted.

During the transmission of certain signals, the high-frequency current in the radio transmitter antenna consists of several currents of different frequencies. Electromagnetic waves propagating from the transmitter antenna and currents arising under the influence of radio waves in the receiving antenna have the same complex nature.

For each type of transmission (radio telephony, radio telegraphy, television transmission, etc.), the frequencies of these currents occupy a certain band. For medium-wave broadcasting it is approximately 9 kHz, i.e. the broadcast transmitter creates a complex current consisting of several currents whose highest frequency is 9 kHz above the lowest frequency. For example, for a broadcast transmitter operating at a frequency of 173 kHz (λ ≈ 1734 m), these will be frequencies from 168.5 to 177.5 kHz. In the case of official radiotelephone communications, the frequency band is no more than 2 - 2.5 kHz, and for radiotelegraph transmission it is even less. But during television transmission, the frequency band expands to several megahertz.

When a circuit is exposed to electromotive forces of different frequencies, the strongest oscillations are obtained when the emf has a resonant frequency or a frequency close to it. And with a significant deviation of the frequency of the external emf from the resonant value, i.e., when the circuit is detuned relative to the frequency of the external emf, the amplitude of the oscillations turns out to be relatively small.

We can say that each circuit transmits oscillations well within a certain frequency band located on both sides of the resonant frequency. It is called the passband Ppr of the circuit and is conventionally determined from the resonance curve at a level of 0.7 of the maximum value of current or voltage corresponding to the resonant frequency (Fig. 1).

Fig. 1 - Circuit bandwidth

In other words, it is believed that the circuit transmits vibrations well when their amplitude decreases by no more than 30% compared to the amplitude at resonance. The bandwidth of a circuit is sometimes also called the width of the resonance curve. The quality of the circuit affects the shape of the resonance curve. From this figure it can be seen that the lower the quality of the circuit, the greater its bandwidth. In addition, the bandwidth is greater at a higher resonant frequency of the circuit.

The dependence of the circuit passband on its attenuation d, or quality factor Q, is given by the simple formula

Ppr = f0 ∙ d = f0 / Q.

For example, a circuit tuned to a frequency f0 = 2000 kHz and having attenuation d = 0.01 has a passband Ppr = 0.01 * 2000 = 20 kHz.

As you can see, to obtain a narrow passband it is necessary to use a circuit with a high quality factor, and to obtain a wide passband, a circuit with a low quality factor, or to operate at a very high resonant frequency.

From the above formula it follows that f0 = Q * Ppr. Since an average-quality circuit has a Q of at least 20, the operating frequency must be at least 20 times higher than the passband. For example, a television transmission, for which Ppr is several megahertz, must be conducted at frequencies no lower than several tens of megahertz, i.e. on ultrashort waves.
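The relations Ppr = f0 ∙ d = f0 / Q and f0 = Q * Ppr in code (the numbers repeat the example from the text):

```python
def circuit_bandwidth(f0_khz, q_factor):
    """Passband of a resonant circuit: Ppr = f0 / Q (equivalently f0 * d)."""
    return f0_khz / q_factor

# f0 = 2000 kHz with attenuation d = 0.01 (i.e. Q = 100) gives Ppr = 20 kHz,
# and conversely f0 = Q * Ppr.
ppr = circuit_bandwidth(2000, 100)
```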

It is desirable that the circuit have a bandwidth corresponding to the frequency band that is characteristic of this type of transmission. If the bandwidth is smaller, then distortion will result due to poor transmission of some vibrations. A wider band is undesirable, as there may be interference from signals from radio stations operating on adjacent frequencies.

If a wide passband is required, then low-Q circuits must often be used. The quality factor of the circuit decreases, and the passband increases, if an active resistance R, called a shunt resistance, is connected in parallel with the circuit (Fig. 2). Indeed, the alternating voltage U present on the circuit is applied to the resistance R and creates a current in it, so power is wasted in this resistance. The lower the resistance R, the greater the power loss and the greater the attenuation of the circuit. If the resistance R is very small, it will short-circuit one of the circuit elements (the capacitor in Fig. 2a) or the entire circuit (Fig. 2b). Then the circuit will not be able to work as an oscillatory system at all or exhibit its resonant properties.

Fig. 2 - Shunting the circuit with an active resistance

Shunting a circuit with active resistance is sometimes done specifically to expand the bandwidth. In addition, such shunting exists due to the fact that the circuit is connected to other parts and circuits. As a result, an undesirable deterioration in the quality of the circuit occurs.

The internal resistance of the generator feeding the parallel circuit also affects the quality factor of the circuit and its bandwidth. This can be easily explained as follows.

Let the generator stop working at some point. Then the oscillations in the circuit will begin to attenuate, and the internal resistance of the generator connected to the circuit will play the role of a shunt resistance, increasing the attenuation.

The greater the Ri of the generator, the weaker its influence, which means the resonance curve of the circuit is sharper and its bandwidth is smaller, i.e. the resonant properties of the circuit are more pronounced. With a small Ri of the generator, the quality factor of the circuit decreases so much and the passband becomes so wide that the resonant properties of the circuit are practically absent.

We came to a similar conclusion about the influence of the Ri generator earlier when considering the operation of a parallel circuit.

Very often, when communicating with IT specialists, slow performance of corporate applications is blamed on the network department or narrow communication channels. The simplest solution to all problems seems to be more bandwidth (a wider channel) and fewer extraneous applications in the channel (fewer competitors for the bandwidth), and then everything will fly. Of course, attention should be paid to keeping communication channels clean and properly used, but these are not the only parameters. The simplest approach to assessing the state of channels is Flow technology: correlating the performance of a key application with data from NetFlow (jFlow, sFlow, etc.).

In data networks, latency is a fact of life. By understanding its nature, you can reduce its negative effect and thereby increase the quality of communication. Network delays are defined by ITU standards and must be within certain limits.

The sequential principle of transmitting packets over a communication channel introduces delays. The delay in transmitting information from one user to another consists of several components and they can be divided into two large classes - fixed and variable.

Variable delays mainly include delays in queues at each network node: router, switch, network adapter. Fixed - packetization delay, sequential delay, codec delay (for video or audio). The transmission medium can be copper pair, fiber optic cable or ether. In this case, the amount of delay depends on the clock frequency and, to a much lesser extent, on the speed of light in the transmission medium.

Cisco documentation has this table that allows you to estimate the sequential delay depending on the length of the packets and the width of the communication channel:

[Table: serialization delay by frame size (bytes) and channel transmission rate (kbit/s); the table body is not reproduced here.]

To transmit a 1518-byte frame (the maximum length for Ethernet) over a 64 kbit/s link, the serialization delay reaches 185 ms. If 64-byte packets are transmitted over the same channel, the delay is only 8 ms: the shorter the packet, the faster it reaches the receiving side. This is why short UDP packets, which minimize delay, are used for voice transmission, while developers of data transmission equipment, on the contrary, strive to increase frame length to reduce the share of service traffic. The serialization delay can be calculated with the formula:

Serialization delay = (number of bytes to send or receive × 8 bits) / (slowest link speed, bit/s)

For example, the serialization delay to send 100 KB and to receive 1 MB over a 2 Mbit/s link would be:

Transfer: (100,000 × 8) / 2,048,000 ≈ 390 ms

Receive: (1,024,000 × 8) / 2,048,000 = 4,000 ms (4 s)
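The formula above can be checked with a short sketch; the function name is ours, and the numbers reproduce the worked examples (the send figure is exactly 390.625 ms, which the article rounds to 390 ms):

```python
def serialization_delay_ms(num_bytes: int, link_bps: float) -> float:
    """Serialization delay = (bytes * 8 bits) / link speed, in milliseconds."""
    return num_bytes * 8 * 1000 / link_bps

LINK = 2_048_000  # 2 Mbit/s channel

print(serialization_delay_ms(100_000, LINK))    # send 100 KB  -> 390.625 ms
print(serialization_delay_ms(1_024_000, LINK))  # receive 1 MB -> 4000.0 ms
print(serialization_delay_ms(64, 64_000))       # 64-byte packet on 64 kbit/s -> 8.0 ms
```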

Of course, serialization delay is only one of the components; each flow is additionally affected by delays in the communication channels, jitter, and so on. The formula shows an idealized picture in which no other users or applications compete for the channel. This can be seen in the diagram showing the actual speed of a 10 Mbit/s channel while a 200 KB file is transferred over FTP.

We see that the speed is not constant during the transfer. Since the network is a shared medium, packets end up in queues as they cross the network, some are lost, and the medium access control algorithm kicks in to prevent a single user from capturing the entire channel. All of this affects the transfer speed and, as a result, the responsiveness of the application.

How to increase the speed of applications without changing the bandwidth of the communication channel?

Naturally, the easiest way out is to widen the communication channel, but sometimes this is impossible, or very expensive for corporate clients. In that case, it is logical to reduce the amount of data transmitted over the channel. There are several ways to do this: data compression, thin clients, caching, and dedicated traffic optimization solutions. Together these can sometimes reduce traffic by a factor of 2 to 5 (different applications compress differently).
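As a minimal illustration of the compression idea, using Python's standard zlib; the repetitive log-like payload below is an arbitrary example, and real compression ratios depend heavily on the application's data:

```python
import zlib

# A deliberately repetitive payload: text protocols and logs often look like this.
payload = b"2024-01-01 12:00:00 INFO request handled ok\n" * 500

compressed = zlib.compress(payload, level=6)

ratio = len(payload) / len(compressed)
print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {ratio:.0f}x")
```

Highly repetitive traffic compresses dramatically, while already-compressed content (JPEG, video, encrypted streams) gains almost nothing, which is why the article hedges with "different applications compress differently".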

It is also possible to understand the traffic structure and how the channel is actually used with Flow technologies, and then, by prioritizing traffic, reduce packet loss and the growth of queues in active equipment.

The term "bandwidth" is often used when describing electronic communications networks; it is one of the key characteristics of such systems. At first glance it may seem that a person whose work has nothing to do with communication lines has no need to understand what channel bandwidth is. In reality, things are a little different. Many people have a home personal computer connected to the Internet, and everyone knows that work with the World Wide Web sometimes slows down for no apparent reason. One of the reasons is that at that very moment the provider's channel is overloaded. The result is a noticeable slowdown and possible malfunctions. Before defining the concept of "bandwidth", let us use an example that allows anyone to understand what we are talking about.

Let's imagine a highway in a small provincial town and in a densely populated metropolis. In the first case it is usually designed for one or two traffic flows, so the road is narrow. In large cities, even four-lane traffic will not surprise anyone. Over the same period, the number of cars covering the same distance on these two roads differs significantly. It depends on two characteristics: the speed of movement and the number of lanes. In this analogy, the road is the communication channel and the cars are bits of information; in turn, each lane is a communication line.

In other words, bandwidth indirectly indicates how much data can be transmitted per unit of time. The higher this parameter, the more comfortable it is to work through such a connection.

If everything is obvious with transmission speed (it increases as signal transmission delays decrease), then the term "bandwidth" is a little more complicated. As is known, for a signal to carry information it is transformed in a certain way; in electronics this can be amplitude, frequency, or combined modulation. One of the features of transmission, however, is that several pulses at different frequencies can travel along the same conductor simultaneously (within the overall passband, as long as distortion stays within acceptable limits). This makes it possible to increase the overall capacity of the communication line without changing delays. A vivid example of frequencies coexisting is a simultaneous conversation among several people with different voice timbres: although everyone is speaking, each person's words remain quite distinguishable.

Why is there sometimes a slowdown when working with the network? Everything is explained quite simply:

The higher the delay, the lower the effective speed; any interference with the signal, whether software or physical, reduces performance;

The transmitted data often includes additional bits that carry no payload, the so-called "redundancy". It is needed to keep the line operational in the presence of interference;

The physical limit of the conducting medium has been reached: all available capacity is already in use, and new pieces of data are placed in a queue for sending.
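The queue growth in the last point can be illustrated with the classic M/M/1 queueing approximation (a textbook simplification, not a model the article itself uses): the average time a packet spends in the system explodes as channel utilization approaches 100%.

```python
def mm1_queueing_delay_ms(service_ms: float, utilization: float) -> float:
    """Average time a packet spends in an M/M/1 system (waiting + service).

    service_ms  -- average time to serialize one packet onto the link
    utilization -- offered load divided by channel capacity, 0 <= u < 1
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1 - utilization)

# With a 1 ms service time, delay grows slowly at first, then explodes:
for u in (0.1, 0.5, 0.9, 0.99):
    print(f"utilization {u:.0%}: {mm1_queueing_delay_ms(1.0, u):.1f} ms")
```

This is why links running near saturation feel so much slower than their nominal width suggests, even before any packets are dropped.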

To solve such problems, providers take several different approaches. This could be virtualization, which increases the "width" but introduces additional delays; enlarging the channel with additional conducting media; and so on.

In digital technology the term "baud" is sometimes used. Strictly speaking, it means the number of signal changes (symbols) per unit of time, not bits. In the days of slow dial-up lines, one symbol carried one bit, so 1 baud corresponded to 1 bit per second. Later, as speeds grew, a single symbol came to encode 2, 3, or more bits, so "baud" ceased to match the bit rate and required a separate indication; nowadays the universally understood bits-per-second measure is used instead.
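The relationship between symbol rate (baud) and bit rate can be sketched as follows; the 2400-baud, 16-level example is illustrative, roughly matching the QAM schemes of old modems:

```python
import math

def bit_rate_bps(baud: float, levels: int) -> float:
    """Bit rate = symbol rate * bits per symbol (log2 of distinct signal levels)."""
    return baud * math.log2(levels)

print(bit_rate_bps(2400, 2))   # 1 bit per symbol  -> 2400.0 bps (baud equals bps)
print(bit_rate_bps(2400, 16))  # 4 bits per symbol -> 9600.0 bps
```

With only two signal levels, baud and bits per second coincide, which is exactly the dial-up-era situation the paragraph describes.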



 
