
Lecture No. 1

"Analog, discrete and digital signals."

The two most fundamental concepts in this course are the concepts of signal and system.

A signal is a physical process (for example, a time-varying voltage) that carries some information or message. Mathematically, a signal is described by a function of a certain type.

One-dimensional signals are described by a real- or complex-valued function defined on an interval of the real axis (usually the time axis). An example of a one-dimensional signal is the electric current in a microphone wire, which carries information about the perceived sound.

A signal x(t) is called bounded if there exists a positive number A such that |x(t)| ≤ A for any t.

The energy of a signal x(t) is defined as the quantity

E = ∫_{-∞}^{+∞} x²(t) dt. (1.1)

If E < ∞, the signal x(t) is said to have limited (finite) energy. Finite-energy signals have the property that their average power is zero.

A bounded signal of finite duration always has limited energy.

The power of a signal x(t) is defined as the quantity

P = lim_{T→∞} (1/(2T)) ∫_{-T}^{T} x²(t) dt. (1.2)

If P < ∞, the signal x(t) is said to have limited (finite) power. Signals with limited power can take non-zero values for an unlimited time.
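As a numerical illustration of definitions (1.1) and (1.2), the following sketch approximates the energy and power integrals by Riemann sums for a rectangular pulse. The pulse shape, step size, and window length are illustrative assumptions, not values from the lecture:

```python
def signal_energy(x, dt):
    """Riemann-sum approximation of the energy E = integral of x^2(t) dt  (1.1)."""
    return sum(v * v for v in x) * dt

def signal_power(x, dt, T):
    """Approximation of the power P = (1/(2T)) * integral over [-T, T] of x^2(t) dt  (1.2)."""
    return signal_energy(x, dt) / (2 * T)

# Rectangular pulse of height 1 on [0, 1]: finite energy (E = 1),
# so its average power tends to zero as the observation window 2T grows.
dt, T = 0.001, 50.0
samples = [1.0 if 0.0 <= -T + k * dt <= 1.0 else 0.0 for k in range(int(2 * T / dt))]
E = signal_energy(samples, dt)
P = signal_power(samples, dt, T)
print(round(E, 3), round(P, 5))  # E close to 1, P close to 0
```

The run illustrates the remark above: a finite-energy signal has vanishing average power over a growing window.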

Signals with unbounded energy or power do not exist in reality. Most signals encountered in nature are analog.

Analog signals are described by a continuous (or piecewise continuous) function; both the function and its argument t can take any values within certain intervals. Fig. 1.1a shows an example of an analog signal that varies sinusoidally with time; Fig. 1.1b shows another example of an analog signal with a different law of variation.



An important example of an analog signal is the signal described by the so-called unit (step) function, given by the expression

1(t) = 1 for t ≥ 0, 1(t) = 0 for t < 0. (1.3)

The graph of the unit function is shown in Fig. 1.2.


The function 1(t) can be considered as the limit of a family of continuous functions 1(a, t) as the parameter a of this family is varied:

1(a, t) = 0 for t < 0; 1(a, t) = t/a for 0 ≤ t ≤ a; 1(a, t) = 1 for t > a. (1.4)

Graphs of the family 1(a, t) for different values of a are presented in Fig. 1.3.


In this case the function 1(t) can be written as

1(t) = lim_{a→0} 1(a, t). (1.5)

Let us denote the derivative of 1(a, t) with respect to t by δ(a, t):

δ(a, t) = d1(a, t)/dt = 1/a for 0 ≤ t ≤ a, and 0 otherwise. (1.6)

The family of graphs of δ(a, t) is presented in Fig. 1.4.



The area under the curve δ(a, t) does not depend on a and is always equal to 1. Indeed,

∫_{-∞}^{+∞} δ(a, t) dt = ∫_0^a (1/a) dt = 1. (1.7)
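The a-independence of the area (1.7) can be checked numerically. The rectangle family below matches definition (1.6); the particular widths and the integration step are arbitrary illustrative choices:

```python
def delta_a(a, t):
    """Derivative of the ramp family 1(a, t): height 1/a on [0, a], zero elsewhere (1.6)."""
    return 1.0 / a if 0.0 <= t <= a else 0.0

def area(a, dt=1e-5):
    """Midpoint-rule integral of delta_a(a, t) over its support [0, a]  (1.7)."""
    n = int(round(a / dt))
    return sum(delta_a(a, (k + 0.5) * dt) for k in range(n)) * dt

for a in (1.0, 0.1, 0.01):
    print(a, round(area(a), 6))  # the area is 1 regardless of a
```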

The function

δ(t) = lim_{a→0} δ(a, t) (1.8)

is called the Dirac impulse function, or δ-function. The values of the δ-function are zero at all points except t = 0. At t = 0 the δ-function is infinite, but in such a way that the area under its curve equals 1. Figure 1.5 shows the graphs of the functions δ(t) and δ(t − τ).


Let us note some properties of the δ-function:

1. x(t) δ(t − τ) = x(τ) δ(t − τ). (1.9)

This follows from the fact that δ(t − τ) ≠ 0 only at t = τ.

2. ∫_{-∞}^{+∞} x(t) δ(t − τ) dt = x(τ). (1.10)

In this integral the infinite limits can be replaced by finite ones, provided that the argument of the function δ(t − τ) vanishes within these limits:

∫_{t₁}^{t₂} x(t) δ(t − τ) dt = x(τ) for t₁ < τ < t₂. (1.11)

3. The Laplace transform of the δ-function:

L{δ(t − τ)} = e^{−pτ}. (1.12)

In particular, for τ = 0,

L{δ(t)} = 1. (1.13)

4. The Fourier transform of the δ-function. Setting p = jω in (1.12), we get

F{δ(t − τ)} = e^{−jωτ}. (1.14)

For τ = 0

F{δ(t)} = 1, (1.15)

i.e., the spectrum of the δ-function is equal to 1.
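The sifting property (1.10) can be verified numerically by replacing δ(t − τ) with a narrow rectangle of unit area, in the spirit of the family δ(a, t). The pulse width, the test signal cos t, and the integration step are illustrative assumptions:

```python
import math

def delta_rect(t, a):
    """Narrow rectangle of width a and height 1/a centred at t = 0 (unit area)."""
    return 1.0 / a if -a / 2 <= t <= a / 2 else 0.0

def sift(x, tau, a=1e-4, dt=1e-6):
    """Midpoint-rule approximation of the integral of x(t) * delta(t - tau) dt  (1.10)."""
    n = int(round(2 * a / dt))
    total = 0.0
    for k in range(n):
        t = tau - a + (k + 0.5) * dt
        total += x(t) * delta_rect(t - tau, a) * dt
    return total

tau = 0.5
print(round(sift(math.cos, tau), 5))  # close to cos(0.5) ~ 0.87758
```

As the pulse narrows, the integral picks out exactly the value x(τ), which is the content of the sifting property.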

An analog signal f(t) is called periodic if there exists a real number T such that f(t + T) = f(t) for any t; T is then called the period of the signal. An example of a periodic signal is the sinusoidal signal of Fig. 1.1a, for which T = 1/f. Another example of a periodic signal is the sequence of δ-functions described by the equation

δ_T(t) = Σ_{n=−∞}^{+∞} δ(t − nT), (1.16)

whose graph is shown in Fig. 1.6.


Discrete signals differ from analog signals in that their values are known only at discrete moments in time. Discrete signals are described by lattice functions (sequences) x_d(nT), where T = const is the sampling interval (period) and n = 0, 1, 2, …. At the discrete instants the function x_d(nT) itself can take arbitrary values within a certain interval; these values are called samples (readings) of the function. Other notations for the lattice function x_d(nT) are x(n) and x_n. Figs. 1.7a and 1.7b show examples of lattice functions. The sequence x(n) can be finite or infinite, depending on the interval over which the function is defined.
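A minimal sketch of a lattice function, assuming an illustrative sinusoid and sampling period (neither is specified in the lecture):

```python
import math

f = 5.0      # signal frequency, Hz (illustrative)
T = 0.01     # sampling period, s (illustrative)
N = 8        # number of samples

# Lattice function x_d(nT): the analog signal x(t) = sin(2*pi*f*t)
# is retained only at the discrete instants t = nT, n = 0, 1, 2, ...
x_d = [math.sin(2 * math.pi * f * n * T) for n in range(N)]
for n, v in enumerate(x_d):
    print(n, round(v, 4))
```

Between the instants nT the lattice function is simply undefined; that is what distinguishes it from the underlying analog signal.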



The process of converting an analog signal into a discrete one is called time sampling. Mathematically, time sampling can be described as modulation of a sequence of δ-functions δ_T(t) by the input analog signal:

x_d(t) = x(t) δ_T(t) = Σ_{n} x(nT) δ(t − nT). (1.17)

The process of restoring an analog signal from a discrete one is called time extrapolation.

The concepts of energy and power are also introduced for discrete sequences. The energy of a sequence x(n) is defined as the quantity

E = Σ_{n=−∞}^{+∞} x²(n). (1.18)

The power of a sequence x(n) is defined as the quantity

P = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} x²(n). (1.19)

For discrete sequences, the same regularities concerning energy and power limitation hold as for continuous signals.

A sequence x(nT) is called periodic if it satisfies the condition x(nT) = x(nT + mNT), where m and N are integers; N is then called the period of the sequence. It is sufficient to specify a periodic sequence on one period interval, for example at 0 ≤ n ≤ N − 1.

Digital signals are discrete signals that at discrete moments of time can take only a finite set of discrete values, called quantization levels. The process of converting a discrete signal into a digital one is called quantization by level. Digital signals are described by quantized lattice functions x_q(nT). Examples of digital signals are shown in Figs. 1.8a and 1.8b.



The relationship between the lattice function x_d(nT) and the quantized lattice function x_q(nT) is determined by the nonlinear quantization function x_q(nT) = F_k(x_d(nT)). Each quantization level is coded by a number; binary coding is typically used, so that the quantized samples x_q(nT) are encoded as binary numbers with m digits. The number of quantization levels N and the smallest number of binary digits m with which all these levels can be encoded are related by

m = int(log₂ N), (1.20)

where int(x) is the smallest integer not less than x.
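Relation (1.20) can be checked directly; the level counts below are arbitrary examples:

```python
import math

def bits_for_levels(N):
    """Smallest m with 2**m >= N, i.e. m = int(log2 N) in the sense of (1.20)."""
    return math.ceil(math.log2(N))

for N in (2, 100, 256, 1000):
    print(N, bits_for_levels(N))  # 1, 7, 8, 10
```

For instance, 100 levels need 7 bits because 2⁶ = 64 < 100 ≤ 128 = 2⁷.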

Thus, quantization of a discrete signal consists in representing each sample x_d(nT) by a binary number containing m digits. As a result, the sample is represented with an error, which is called the quantization error:

e(nT) = x_q(nT) − x_d(nT). (1.21)

The quantization step Q is determined by the weight of the least significant binary digit of the resulting number:

Q = 2^{−m}. (1.22)

The main quantization methods are truncation and rounding.

Truncation to an m-bit binary number consists in discarding all the low-order bits of the number except the m high-order ones. For positive numbers the truncation error is non-positive for any coding method; for negative numbers it is non-negative in direct (sign-magnitude) code and non-positive in additional (two's-complement) code. Thus, in all cases the absolute value of the truncation error does not exceed the quantization step:

|e| < Q. (1.23)

The graph of the truncation function for additional (two's-complement) code is shown in Fig. 1.9, and for direct (sign-magnitude) code in Fig. 1.10.




Rounding differs from truncation in that, in addition to discarding the low-order bits of the number, the m-th (least significant retained) digit may also be modified: it either remains unchanged or is increased by one, depending on whether the discarded part of the number is less than or greater than half the quantization step. In practice, rounding can be accomplished by adding one to the (m + 1)-th digit of the number and then truncating the result to m digits. For all coding methods the rounding error lies within ±Q/2, and therefore

|e| ≤ Q/2. (1.24)

The graph of the rounding function is shown in Fig. 1.11.



The consideration and use of various signals presupposes the ability to measure their values at given moments of time. Naturally, the question arises of the reliability (or, conversely, the uncertainty) of such measurements. These questions are the subject of information theory, founded by C. Shannon. The main idea of information theory is that information can be treated in much the same way as physical quantities such as mass and energy.

We usually characterize the accuracy of measurements by the numerical values of the errors obtained, or estimated, during measurement, using the concepts of absolute and relative error. If a measuring device has a measuring range from x₁ to x₂ with an absolute error ±Δ independent of the current value x of the measured quantity, then, having obtained a measurement result xₙ, we record it as xₙ ± Δ and characterize it by the relative error δ = Δ/xₙ.

Considering these same actions from the standpoint of information theory is somewhat different in character: all the listed concepts are given a probabilistic, statistical meaning, and the result of a measurement is interpreted as a reduction of the region of uncertainty of the measured quantity. In information theory, the fact that a measuring device has a range from x₁ to x₂ means that readings can be obtained only within this range. In other words, the probability of obtaining readings below x₁ or above x₂ equals 0, while the probability of obtaining a reading somewhere between x₁ and x₂ equals 1.

If we assume that all measurement results in the range from x₁ to x₂ are equally probable, i.e. the probability density is the same for all values of the measured quantity along the entire scale of the device, then from the point of view of information theory our knowledge about the value of the measured quantity before the measurement can be represented by the graph of the probability density p(x).

Since the total probability of obtaining a reading somewhere between x₁ and x₂ equals 1, the curve p(x) must enclose an area equal to 1, which means that

∫_{x₁}^{x₂} p(x) dx = 1. (1.25)

After the measurement we obtain an instrument reading equal to xₙ. However, because of the instrument error ±Δ we cannot claim that the measured quantity is exactly xₙ, so we write the result as xₙ ± Δ. This means that the actual value of the measured quantity x lies somewhere between xₙ − Δ and xₙ + Δ. From the point of view of information theory, the result of our measurement is that the region of uncertainty has been reduced to 2Δ and is now characterized by a much higher probability density:

p(x) = 1/(2Δ) for xₙ − Δ ≤ x ≤ xₙ + Δ. (1.26)

Obtaining any information about the quantity of interest to us consists, therefore, in reducing the uncertainty of its value.

As a characteristic of the uncertainty of the value of a random variable x, C. Shannon introduced the concept of the entropy of the quantity x, calculated as

H(x) = − ∫ p(x) log p(x) dx. (1.27)

The units used to measure entropy depend on the base of the logarithm in this expression. With decimal logarithms, entropy is measured in decimal units, or dits; with binary logarithms, it is expressed in binary units, or bits.

In most cases, the uncertainty of our knowledge of a signal's value is caused by interference or noise. The disinformation effect of noise during signal transmission is characterized by the entropy of the noise as a random variable. If the noise is probabilistically independent of the transmitted signal, then, regardless of the signal's statistics, a definite amount of entropy can be assigned to the noise, characterizing its disinformation effect. In this case the system can be analyzed separately for noise and for signal, which greatly simplifies the problem.

Shannon's theorem on the amount of information. If a signal with entropy H(x) is applied to the input of an information transmission channel, and the noise in the channel has entropy H(Δ), then the amount of information at the channel output is determined as

I = H(x) − H(Δ). (1.28)

If, in addition to the main signal transmission channel, there is an auxiliary channel, then, to correct the errors arising from noise with entropy H(Δ), it is necessary to transmit through this channel an additional amount of information no less than

ΔI = H(Δ). (1.29)

This data can be encoded in such a way that it will be possible to correct all errors caused by noise, except for an arbitrarily small fraction of these errors.

In our case, for a uniformly distributed random variable, the entropy is defined as

H(x) = log (x₂ − x₁), (1.30)

and the remaining, or conditional, entropy of the measurement result after obtaining the reading xₙ is

H(x | xₙ) = log (2Δ). (1.31)

Hence the obtained amount of information, equal to the difference between the initial and the remaining entropy, is

I = H(x) − H(x | xₙ) = log (x₂ − x₁) − log (2Δ) = log ((x₂ − x₁)/(2Δ)). (1.32)
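Formula (1.32) can be evaluated directly; the instrument range and absolute error below are illustrative assumptions:

```python
import math

def information_gain_bits(x1, x2, delta):
    """I = log2(x2 - x1) - log2(2 * delta) = log2((x2 - x1) / (2 * delta))  (1.30)-(1.32)."""
    return math.log2((x2 - x1) / (2.0 * delta))

# Instrument with range 0..100 units and absolute error +/- 0.5 units:
print(round(information_gain_bits(0.0, 100.0, 0.5), 3))  # log2(100) ~ 6.644 bits
```

Halving the instrument error Δ adds exactly one bit of information, which is the intuitive content of (1.32).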

When analyzing systems with digital signals, quantization errors are treated as a stationary random process with a uniform probability distribution over the range of the quantization error. Figs. 1.12a, b and c show the probability densities of the quantization error for rounding, for truncation in additional (two's-complement) code, and for truncation in direct (sign-magnitude) code, respectively.



Obviously, quantization is a nonlinear operation. The analysis, however, uses a linear model of signal quantization, presented in Fig. 1.13.

In Fig. 1.13: x_q(nT) is the m-bit digital signal, e(nT) is the quantization error.

Probabilistic estimates of the quantization error are obtained by calculating its mathematical expectation

m_e = ∫ e p_e(e) de (1.33)

and variance

σ_e² = ∫ (e − m_e)² p_e(e) de, (1.34)

where p_e is the probability density of the error. For the cases of rounding and truncation, respectively, we obtain

m_e = 0, σ_e² = Q²/12, (1.35)

m_e = −Q/2, σ_e² = Q²/12. (1.36)
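The rounding-error statistics (1.35) can be checked empirically by quantizing uniformly distributed samples; the bit width, sample count, and seed are arbitrary choices:

```python
import random

random.seed(1)
m = 8
Q = 2.0 ** (-m)

# Rounding quantization of 200 000 uniform samples in [0, 1).
errors = []
for _ in range(200_000):
    x = random.random()
    errors.append(Q * round(x / Q) - x)   # e(nT) = x_q(nT) - x_d(nT)  (1.21)

mean = sum(errors) / len(errors)
var = sum((e - mean) ** 2 for e in errors) / len(errors)
print(round(mean / Q, 4))                 # mean close to 0            (1.35)
print(round(var / (Q * Q / 12), 3))       # variance close to Q^2/12
```

The empirical variance lands within a fraction of a percent of Q²/12, which is the standard model of quantization noise used in the analysis above.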

Time sampling and level quantization are inherent features of all microprocessor control systems, dictated by the limited speed and finite word length of the microprocessors used.

Any digital signal processing system, regardless of its complexity, contains a digital computing device: a general-purpose digital computer, a microprocessor, or a computing device specially designed for a particular task. The signal arriving at the input of the computing device must be converted into a form suitable for computer processing: a sequence of numbers represented in machine code.

In some cases the task of representing the input signal in digital form is relatively simple. For example, to transmit verbal text it suffices to associate each symbol (letter) of the text with a certain number, thereby representing the transmitted signal as a numerical sequence. The ease of the task here is explained by the fact that verbal text is discrete by nature.

However, most signals encountered in radio engineering are continuous, because a signal reflects some physical process, and almost all physical processes are continuous in nature.

Let us consider the process of sampling a continuous signal using a specific example. Suppose the air temperature is measured on board a spacecraft, and the measurement results must be transmitted to Earth, to a data processing center.

Fig. 1.1. Types of signals: a - continuous signal; b - discrete signal; c - PAM oscillation; d - digital signal

The air temperature is measured continuously; the temperature sensor readings are likewise a continuous function of time (Fig. 1.1, a). But the temperature changes slowly, and it suffices to transmit its value once a minute. Moreover, there is no need to measure it with an accuracy better than 0.1 degree. Thus, instead of a continuous function, one can transmit a sequence of numerical values at intervals of 1 minute (Fig. 1.1, d), and in the intervals between these values one can transmit information about pressure, air humidity, and other scientific data.

The considered example shows that the process of discretizing continuous signals consists of two stages: sampling in time and sampling in level (quantization). A signal sampled only in time is called discrete; it is not yet suitable for processing in a digital device. A discrete signal is a sequence whose elements are exactly equal to the corresponding values of the original continuous signal (Fig. 1.1, b). An example of a discrete signal is a sequence of pulses of varying amplitude, i.e. a pulse-amplitude-modulated (PAM) oscillation (Fig. 1.1, c). Analytically, such a discrete signal is described by the expression

x_PAM(t) = Σ_{n} x(nT) s(t − nT),

where x(t) is the original continuous signal and s(t) is a single pulse of the PAM oscillation.

If the pulse duration is reduced while its area is kept unchanged, then in the limit the function s(t) tends to the δ-function, and the expression for the discrete signal can be represented as

x_d(t) = Σ_{n} x(nT) δ(t − nT).

To convert an analog signal into a digital one, time sampling must be followed by level sampling (quantization). Quantization is needed because any computing device can operate only with numbers having a finite number of digits: quantization is the rounding of the transmitted values to a given accuracy. In the example considered, the temperature values are rounded to three significant figures (Fig. 1.1, d); in other cases the number of digits of the transmitted values may differ. A signal sampled both in time and in level is called digital.

The correct choice of the sampling intervals in time and in level is very important in the development of digital signal processing systems. The smaller the sampling intervals, the more closely the sampled signal matches the original continuous one. However, as the time sampling interval decreases, the number of samples grows, and to keep the total processing time unchanged the processing speed must be increased, which is not always possible. As the quantization interval decreases, more bits are required to describe the signal, making the digital filter more complex and cumbersome.

Every day a person talks on the telephone, watches various TV channels, listens to music, and surfs the Internet. All means of communication and other information media are based on the transmission of signals of various types. Many people ask how analog information differs from other kinds of data and what a digital signal is. The answer can be obtained by understanding the definitions of the various electrical signals and studying their fundamental differences.

Analog signal

An analog (continuous) signal is a natural information signal characterized by a number of parameters that are described by a function of time with a continuous set of possible values.

The human senses capture all information from the environment in analog form. For example, if a person sees a truck passing nearby, its movement is observed continuously. If the brain received information about the vehicle's position only once every 15 seconds, people would constantly fall under its wheels. A person evaluates distance instantly, and at each moment of time it is defined and different.

The same happens with other information: people hear a sound and evaluate its volume, judge the quality of a video signal, and so on. Accordingly, all these kinds of data are analog in nature and change continuously.

On a note. Analog and digital signals both serve to transmit the speech of people talking on the telephone; the Internet operates by exchanging such signals over network cables. These signals are electrical in nature.

An analog signal is described by a mathematical function of time similar to a sine wave. If, for example, you measure the temperature of water while periodically heating and cooling it, the graph of the function will be a continuous line reflecting its value at every moment of time.

To combat interference, such signals must be amplified by special means and instruments: the higher the interference level, the more the signal must be amplified. This process consumes a great deal of energy, and an amplified radio signal, for example, can itself become a source of interference for other communication channels.

Interesting to know. Analog signals were previously used in all types of communication. Now, however, they are being replaced everywhere, or have already been replaced (in mobile communications and the Internet), by more advanced digital signals.

Analog and digital television still coexist, but digital television and radio broadcasting are rapidly displacing the analog method of data transmission owing to their significant advantages.

To describe this type of information signal, three main parameters are used:

  • frequency;
  • wavelength;
  • amplitude.

Disadvantages of an analog signal

An analog signal has the following properties, which distinguish it from the digital version:

  1. Redundancy. Analog information is not filtered: it carries a great deal of unnecessary data. However, knowing the extra parameters and the nature of the signal, the information can be passed through a filter, for example by the frequency method;
  2. Security. It is almost completely defenseless against unauthorized intrusion from outside;
  3. Helplessness against interference of any kind. If interference is imposed on the data transmission channel, the signal receiver will reproduce it unchanged;
  4. No specific differentiation of sampling levels: the quality and quantity of the transmitted information are not limited by anything.

These properties are the disadvantages of the analog method of data transmission, on the basis of which it can be considered thoroughly obsolete.

Digital and discrete signals

Digital signals are artificial information signals represented as regular digital values that describe specific parameters of the transmitted information.

For information. Nowadays a bit stream, which is simple to encode, is predominantly used: a binary digital signal. This is the type that can be used in binary electronics.

The digital type of data transmission differs from the analog version in that such a signal has a specific number of values. In the case of a bit stream there are two: "0" and "1".

The transition from zero to maximum in a digital signal is abrupt, which allows the receiving equipment to read it more clearly. If noise and interference occur, it is easier for the receiver to decode a digital electrical signal than an analog transmission.

However, digital signals have one drawback compared with the analog version: at a high level of interference they cannot be restored, whereas information can still be extracted from a continuous signal. An example is a telephone conversation in which whole words and even phrases of one of the interlocutors may disappear.

In the digital environment this is called the dropout effect; it can be localized by shortening the communication line or by installing a repeater, which makes an exact copy of the original signal and transmits it further.

Analog information can be transmitted over digital channels after passing through a digitization process performed by special devices. This process is called analog-to-digital conversion (ADC). The reverse process also exists: digital-to-analog conversion (DAC). An example of a DAC device is a digital TV set-top box.

Digital systems are also distinguished by the ability to encrypt and encode data, which became an important reason for the digitization of mobile communications and the Internet.

Discrete signal

There is a third type of information signal: discrete. A signal of this kind is intermittent and changes over time, taking any of the possible (predefined) values.

Discrete information transfer is characterized by changes occurring according to one of three scenarios:

  1. The electrical signal changes only in time, remaining continuous (unchanged) in magnitude;
  2. It changes only in magnitude, while remaining continuous in time;
  3. It can also change simultaneously in both magnitude and time.

Discreteness has found application in the packet transmission of large amounts of data in computing systems.

We have considered various definitions of the concept of "information" and concluded that information can be defined in different ways depending on the chosen approach. One thing can be said clearly: information (knowledge, data, facts, characteristics, reflections, etc.) is an intangible category. But we live in a material world; therefore, to exist and spread in our world, information must be associated with some material basis. Without it, information cannot be transmitted or stored.

The material object (or medium) by means of which this or that information is presented will be called the information carrier, and a change in any characteristic of the carrier will be called a signal.
For example, imagine a uniformly burning light bulb: it conveys no information. But if we switch the bulb on and off (that is, change its brightness), then with alternating flashes and pauses we can convey some message (for example, by Morse code). Likewise, a uniform hum conveys no information, but if we vary the pitch and volume of the sound, we can form a message (which is what we do with spoken language).

Signals can be of two types: continuous (or analog) and discrete.
The textbook gives the following definitions.

A continuous signal takes many values from a certain range; there are no breaks between the values it takes.
A discrete signal takes a finite number of values; all values of a discrete signal can be numbered with integers.

Let us refine these definitions a little.
A signal is called continuous (or analog) if its parameter can take any value within a certain interval.

A signal is called discrete if its parameter can take a finite number of values within a certain interval.

The graphs of these signals look like this:

Examples of continuous signals are music, speech, images, and thermometer readings (the height of the mercury column can be anything and forms a continuous series of values).

Examples of discrete signals are the readings of a mechanical or electronic clock, texts in books, digital readings of measuring instruments, etc.

Let us return to the examples discussed at the beginning of this post: the flashing light bulb and human speech. Which of these signals is continuous and which is discrete? Answer in the comments and justify your answer. Is it possible to convert continuous information into discrete form? If so, give examples.

A signal is an information function that carries a message about the physical properties, state, or behavior of some physical system, object, or environment. The goal of signal processing can be considered the extraction of the useful (target) information contained in the signal and the conversion of this information into a form convenient for perception and further use.

An informative parameter of a signal can be any parameter of the signal carrier that is functionally associated with the values ​​of information data.

The signal itself, in the general sense, is the dependence of one quantity on another; from the mathematical point of view it is a function.

The most common representation of signals is electrical: voltage as a function of time, U(t).

By "analysis" of signals we mean not only purely mathematical transformations but also drawing conclusions about the specific features of the corresponding processes and objects.

The concept of a signal is inextricably linked with the term signal registration, whose use is as broad and ambiguous as that of the term signal itself.

In the most general sense, signal registration can be understood as the operation of isolating a signal and converting it into a form convenient for further use.

Analog signal (AC)

Most signals are analog in nature, that is, they vary continuously in time and can take any value within a certain interval. Analog signals are described by some mathematical function of time.

An example of an AC is the harmonic signal s(t) = A·cos(ω·t + φ).

Analog signals are used in telephony, radio broadcasting, and television. Such a signal cannot be entered into a computer and processed, since over any time interval it has an infinite number of values, and an exact (error-free) representation of its value requires numbers of infinite bit depth. It is therefore necessary to convert the analog signal so that it can be represented by a sequence of numbers of a given bit depth.

Sampling an analog signal consists in representing the signal as a sequence of values taken at discrete moments of time. These values are called samples, and the spacing Δt between them is called the sampling interval.

Quantized signal

During quantization, the entire range of signal values is divided into levels whose number must be representable by numbers of a given bit depth. The distance between adjacent levels is called the quantization step Δ. The number of levels is N (numbered from 0 to N−1), and each level is assigned a number. The signal samples are compared with the quantization levels, and the number corresponding to a certain quantization level is taken as the signal value. Each quantization level is encoded as a binary number with n bits. The number of quantization levels N and the number of bits n of the binary numbers encoding these levels are related by n ≥ log₂(N).

Digital signal

To represent an analog signal as a sequence of finite-bit numbers, it must first be converted into a discrete signal and then quantized. Quantization is a special case of discretization in which discretization is performed in equal steps of value, called quanta. As a result, at each sampling instant the approximate (quantized) value of the signal is known and can be written as an integer. Writing these integers in the binary system yields a sequence of zeros and ones, which is the digital signal.
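The sample, quantize, and encode chain described above can be sketched as a toy ADC. The function name, test signal, bit width, and voltage range are all illustrative assumptions:

```python
import math

def adc(signal, t0, dt, n_samples, n_bits, v_min, v_max):
    """Toy ADC: sample signal(t), quantize into 2**n_bits levels, emit binary codes."""
    levels = 2 ** n_bits
    step = (v_max - v_min) / levels          # quantization step
    codes = []
    for k in range(n_samples):
        v = signal(t0 + k * dt)              # time sampling
        idx = min(levels - 1, max(0, int((v - v_min) / step)))  # level quantization
        codes.append(format(idx, "0{}b".format(n_bits)))        # binary coding
    return codes

# Digitize one period of a sine wave with a 3-bit converter (8 levels).
codes = adc(lambda t: math.sin(2 * math.pi * t), 0.0, 0.125, 8, 3, -1.0, 1.0)
print(codes)  # ['100', '110', '111', '110', '100', '001', '000', '001']
```

The output is exactly the "sequence of zeros and ones" the text describes, one n-bit group per sample.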

The transmission, emission, and reception of messages by electromagnetic systems is called telecommunication.

Signals, like messages, can be continuous or discrete. The information parameter of a continuous signal can take any instantaneous value within certain limits over time.

The continuous signal is often called analog.

A discrete signal is characterized by a finite number of values of the information parameter; often this parameter takes only two values. Let us consider a graphical model displaying the fundamental differences in the formation of analog and discrete signals (Fig. 3.4).

In transmission systems, an analog signal is a continuous electrical or optical signal F_n(t) whose parameters (amplitude, frequency, or phase) vary according to a continuous function of time of the information source, for example a speech message or a moving or still image. Continuous signals can take any values (an infinite set) within certain limits.

Discrete signals consist of individual elements taking a finite number of different values. Discrete analog signals F_d(t) can be obtained from continuous signals F_n(t) by time sampling (at intervals T_d), by amplitude quantization, or by both.

A digital signal F_c(t) is formed as a group of pulses in the binary number system corresponding to the amplitude of the level-quantized and time-discretized analog signal: the presence of an electrical pulse corresponds to "1" in the binary system and its absence to "0".

The main advantage of digital signals is their high noise immunity: in the presence of noise and distortion during transmission, it is sufficient to register the presence or absence of pulses at the receiver.

Thus, to obtain a digital signal it is fundamentally necessary to perform three basic operations on the continuous signal: time sampling, level quantization, and coding.

Fig. 3.4. Varieties of discrete signals and the ways they are formed from an analog signal:

a) - discrete in time;

b) - discrete in level;

c) - discrete in time and level;

d) - digital binary signal.

Appendix to the lecture.

A signal (in information and communication theory) is a material carrier of information used to transmit messages in a communication system. A signal may be generated, but its reception is not obligatory, unlike a message, which must be received by the receiving party, otherwise it is not a message. A signal can be any physical process whose parameters change in accordance with the transmitted message.

A signal, whether deterministic or random, is described by a mathematical model: a function characterizing the change of the signal's parameters. The mathematical model of a signal as a function of time is a fundamental concept of theoretical radio engineering, which has proved fruitful both for the analysis and for the synthesis of radio devices and systems.

In radio engineering, the counterpart of a signal carrying useful information is noise, usually a random function of time that interacts with the signal (for example, additively) and distorts it. The main task of theoretical radio engineering is extracting useful information from the signal with noise necessarily taken into account.

The concept of a signal allows one to abstract from a specific physical quantity, such as current, voltage, or an acoustic wave, and to consider, outside any physical context, the phenomena associated with encoding information and extracting it from signals that are usually distorted by noise. In research, a signal is often represented as a function of time whose parameters can carry the necessary information. The way this function is specified, together with the way the interfering noise is specified, is called the mathematical model of the signal.

The concept of a signal underlies the formulation of basic principles of cybernetics, such as the concept of communication-channel capacity developed by Claude Shannon and the theory of optimal reception developed by V. A. Kotelnikov.