
Lecture No. 1

"Analog, discrete and digital signals."

The two most fundamental concepts in this course are the concepts of signal and system.

A signal refers to a physical process (for example, a time-varying voltage) that conveys some information or message. Mathematically, a signal is described by a function of a certain type.

One-dimensional signals are described by a real or complex function defined on the interval of the real axis (usually the time axis). An example of a one-dimensional signal is the electric current in a microphone wire, which carries information about the perceived sound.

A signal x(t) is called bounded if there exists a positive number A such that |x(t)| ≤ A for any t.

The energy of a signal x(t) is defined as the quantity

E = ∫_{−∞}^{+∞} x²(t) dt. (1.1)

If E < ∞, the signal x(t) is said to have finite energy. Finite-energy signals have the property x(t) → 0 as t → ±∞.

If a signal has finite energy, then it is bounded.

The power of a signal x(t) is defined as the quantity

P = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x²(t) dt. (1.2)

If P < ∞, the signal x(t) is said to have finite power. Signals with finite power may take nonzero values indefinitely long.
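The two definitions above can be checked numerically. A minimal sketch, approximating the integrals (1.1) and (1.2) by Riemann sums on a sampled grid; the signal x(t) = e^{−|t|} is a hypothetical finite-energy example, not one from the text:

```python
import numpy as np

# Approximate signal energy (1.1) and power (1.2) by Riemann sums.
dt = 1e-3
t = np.arange(-50.0, 50.0, dt)
x = np.exp(-np.abs(t))               # hypothetical finite-energy signal

energy = np.sum(x**2) * dt                   # E ~ integral of x^2(t) dt
power = np.sum(x**2) * dt / (t[-1] - t[0])   # P ~ average of x^2 over [-T, T]

# For x(t) = e^{-|t|}: E = 2 * int_0^inf e^{-2t} dt = 1, while the average
# power over a growing window tends to 0 -- a finite-energy signal has
# zero average power.
```

Here the finite window [−50, 50] stands in for the infinite limits; the tails of e^{−|t|} contribute negligibly.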

In reality, signals with unlimited energy and power do not exist. Most signals occurring in nature are analog.

Analog signals are described by a continuous (or piecewise continuous) function, and both the function itself and its argument t can take any values within certain intervals. Fig. 1.1a shows an example of an analog signal varying in time according to a given law; another example of an analog signal, varying according to a different law, is shown in Fig. 1.1b.



An important example of an analog signal is the so-called unit step function, defined by the expression

1(t) = { 1, t ≥ 0; 0, t < 0 }. (1.3)

The graph of the unit function is shown in Fig. 1.2.


The function 1(t) can be viewed as the limit of a family of continuous functions 1(a, t) as the parameter a of this family is varied.

1(a, t) = { 0, t < 0; t/a, 0 ≤ t ≤ a; 1, t > a }. (1.4)

Graphs of the family 1(a, t) for different values of a are presented in Fig. 1.3.


In this case the function 1(t) can be written as

1(t) = lim_{a→0} 1(a, t). (1.5)

Let us denote the derivative of 1(a, t) by δ(a, t):

δ(a, t) = d1(a, t)/dt = { 1/a, 0 ≤ t ≤ a; 0, otherwise }. (1.6)

The family of graphs δ(a, t) is presented in Fig. 1.4.



The area under the curve δ(a, t) does not depend on a and is always equal to 1. Indeed,

∫_{−∞}^{+∞} δ(a, t) dt = (1/a)·a = 1. (1.7)
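The unit area (1.7) is easy to verify numerically. A minimal sketch, assuming the pulse shape of δ(a, t): a rectangle of width a and height 1/a (the derivative of a linear ramp), which is an assumption consistent with the text:

```python
import numpy as np

# d(a, t): rectangular pulse of width a and height 1/a (assumed shape).
def d(a, t):
    return np.where((t >= 0) & (t < a), 1.0 / a, 0.0)

# Riemann-sum approximation of the area under d(a, t) for several a.
dt = 1e-5
t = np.arange(-1.0, 1.0, dt)
areas = [np.sum(d(a, t)) * dt for a in (0.5, 0.1, 0.01)]
# Each area is ~1 regardless of a; as a -> 0 the pulse tends to delta(t).
```

As a shrinks, the pulse grows taller and narrower while the area stays fixed at 1, which is exactly the limiting behavior that defines the δ-function.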

The function

δ(t) = lim_{a→0} δ(a, t) (1.8)

is called the Dirac impulse function, or δ-function. The values of the δ-function are zero at all points except t = 0. At t = 0 the δ-function is infinite, but in such a way that the area under its curve equals 1. Figure 1.5 shows the graphs of the functions δ(t) and δ(t − τ).


Let us note some properties of the δ-function:

1. x(t)·δ(t − τ) = x(τ)·δ(t − τ). (1.9)

This follows from the fact that δ(t − τ) is nonzero only at t = τ.

2. ∫_{−∞}^{+∞} x(t)·δ(t − τ) dt = x(τ). (1.10)

In the integral, the infinite limits can be replaced by finite ones, provided that the argument of the function δ(t − τ) vanishes within these limits:

∫_{t₁}^{t₂} x(t)·δ(t − τ) dt = x(τ), t₁ < τ < t₂. (1.11)

3. The Laplace transform of the δ-function:

L{δ(t − τ)} = e^{−pτ}. (1.12)

In particular, when τ = 0,

L{δ(t)} = 1. (1.13)

4. The Fourier transform of the δ-function. Setting p = jω in (1.12), we get

F{δ(t − τ)} = e^{−jωτ}. (1.14)

At τ = 0,

F{δ(t)} = 1, (1.15)

i.e., the spectrum of the δ-function equals 1.

An analog signal f(t) is called periodic if there exists a real number T such that f(t + T) = f(t) for any t; T is then called the period of the signal. An example of a periodic signal is the one presented in Fig. 1.2a, for which T = 1/f. Another example of a periodic signal is the sequence of δ-functions described by the equation

δ_T(t) = Σ_{n=−∞}^{+∞} δ(t − nT), (1.16)

whose graph is shown in Fig. 1.6.


Discrete signals differ from analog ones in that their values are known only at discrete moments in time. Discrete signals are described by lattice functions (sequences) x_d(nT), where T = const is the sampling interval (period) and n = 0, 1, 2, …. The function x_d(nT) itself can take arbitrary values at the discrete moments over a certain interval; these values of the function are called samples. Other notations for the lattice function x_d(nT) are x(n) or x_n. Fig. 1.7a and 1.7b show examples of lattice functions. The sequence x(n) can be finite or infinite, depending on the interval of definition of the function.



The process of converting an analog signal into a discrete one is called time sampling. Mathematically, time sampling can be described as modulation of a sequence of δ-functions δ_T(t) by the input analog signal:

x_d(t) = x(t)·δ_T(t) = Σ_{n=0}^{∞} x(nT)·δ(t − nT). (1.17)
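In code, time sampling simply keeps the values of the analog signal at t = nT. A minimal sketch, where the signal x(t) = sin(2πf₀t) and the values of f₀ and T are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Time sampling: the lattice function x_d(nT) holds the values of the
# analog signal x(t) = sin(2*pi*f0*t) at the instants t = nT only.
f0 = 5.0           # signal frequency, Hz (illustrative)
T = 0.01           # sampling interval, s (illustrative)
n = np.arange(100)
x_d = np.sin(2 * np.pi * f0 * n * T)   # samples x(nT)

# The sample at n = 25 equals the analog value at t = 0.25 s exactly:
analog_value = np.sin(2 * np.pi * f0 * 0.25)
```

Between the instants nT the discrete signal is simply undefined; nothing of the analog waveform is stored there.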

The process of restoring an analog signal from a discrete one is called time extrapolation.

For discrete sequences, the concepts of energy and power are also introduced. The energy of a sequence x(n) is defined as the quantity

E = Σ_{n=−∞}^{+∞} x²(n). (1.18)

The power of a sequence x(n) is defined as the quantity

P = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} x²(n). (1.19)

For discrete sequences, the same regularities concerning bounded energy and power hold as for continuous signals.

A sequence x(nT) is called periodic if it satisfies the condition x(nT) = x(nT + mNT), where m and N are integers. Here N is called the period of the sequence. It suffices to specify a periodic sequence on one period interval, for example at 0 ≤ n ≤ N − 1.

Digital signals are discrete signals that at discrete moments in time can take only a finite set of discrete values, the quantization levels. The process of converting a discrete signal into a digital one is called level quantization. Digital signals are described by quantized lattice functions x_q(nT). Examples of digital signals are shown in Fig. 1.8a and 1.8b.



The relationship between the lattice function x_d(nT) and the quantized lattice function x_q(nT) is determined by a nonlinear quantization function x_q(nT) = F_k(x_d(nT)). Each quantization level is coded with a number. Typically, binary coding is used for this purpose, so that the quantized samples x_q(nT) are encoded as binary numbers with m digits. The number of quantization levels N and the smallest number of binary digits m with which all these levels can be encoded are related by

m = int(log₂ N), (1.20)

where int(x) is the smallest integer not less than x.
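Relation (1.20) is a one-liner: "the smallest integer not less than x" is the ceiling function. A minimal sketch:

```python
import math

# (1.20): smallest number of binary digits m able to encode N levels.
def bits_for_levels(N):
    return math.ceil(math.log2(N))

# 256 levels fit exactly in 8 bits; 100 levels still need 7 bits,
# because 2^6 = 64 < 100 <= 128 = 2^7.
```

For example, `bits_for_levels(256)` gives 8 and `bits_for_levels(100)` gives 7.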

Thus, quantization of a discrete signal consists in representing its sample x_d(nT) by a binary number containing m digits. As a result of quantization, the sample is represented with an error, called the quantization error:

e(nT) = x_q(nT) − x_d(nT). (1.21)

The quantization step Q is determined by the weight of the least significant binary digit of the resulting number:

Q = 2^{−m}. (1.22)

The main quantization methods are truncation and rounding.

Truncation to an m-bit binary number consists in discarding all the low-order bits of the number except the m high-order ones. In this case the truncation error satisfies −Q < e_t ≤ 0 for positive numbers under any coding method. For negative numbers, the truncation error is non-negative when the direct (sign-magnitude) code is used, and non-positive when the two's complement code is used. Thus, in all cases the absolute value of the truncation error does not exceed the quantization step:

|e_t| ≤ Q. (1.23)

The graph of the truncation function for the two's complement code is shown in Fig. 1.9, and for the direct code in Fig. 1.10.




Rounding differs from truncation in that, besides discarding the low-order digits of the number, the m-th (least significant retained) digit may also be modified: it either remains unchanged or is increased by one, depending on whether the discarded part of the number is smaller or larger than half the quantization step. In practice, rounding can be accomplished by adding one to the (m + 1)-th digit of the number with subsequent truncation of the result to m digits. The rounding error for all coding methods lies within ±Q/2, and therefore

|e_r| ≤ Q/2. (1.24)

The graph of the rounding function is shown in Fig. 1.11.



Consideration and use of various signals presupposes the ability to measure their values at given moments in time. Naturally, the question arises of the reliability (or, conversely, the uncertainty) of measuring the values of signals. These questions are addressed by information theory, founded by C. Shannon. The main idea of information theory is that information can be treated in much the same way as physical quantities such as mass and energy.

We usually characterize the accuracy of measurements by the numerical values of the errors obtained during measurement, or of the estimated errors, using the concepts of absolute and relative error. If a measuring device has a measuring range from x₁ to x₂ with an absolute error ±Δ independent of the current value x of the measured quantity, then, having obtained the measurement result xₙ, we record it as xₙ ± Δ and characterize it by the relative error Δ/xₙ.

Consideration of these same operations from the standpoint of information theory has a somewhat different character: all of the listed concepts are given a probabilistic, statistical meaning, and the result of a measurement is interpreted as a reduction of the region of uncertainty of the measured quantity. In information theory, the fact that a measuring device has a measuring range from x₁ to x₂ means that readings can be obtained only within this range. In other words, the probability of obtaining readings less than x₁ or greater than x₂ is 0, while the probability of obtaining a reading somewhere between x₁ and x₂ is 1.

If we assume that all measurement results in the range from x₁ to x₂ are equally probable, i.e. that the probability density is the same for all values of the measured quantity along the entire scale of the device, then from the point of view of information theory our knowledge of the value of the measured quantity before the measurement can be represented by a graph of the probability density p(x).

Since the total probability of obtaining a reading somewhere between x₁ and x₂ equals 1, the curve must enclose an area equal to 1, which means that

∫_{x₁}^{x₂} p(x) dx = 1. (1.25)

After the measurement we obtain an instrument reading equal to xₙ. However, because of the instrument error ±Δ, we cannot claim that the measured quantity is exactly equal to xₙ. Therefore we write the result as xₙ ± Δ. This means that the actual value of the measured quantity x lies somewhere between xₙ − Δ and xₙ + Δ. From the point of view of information theory, the result of our measurement is that the region of uncertainty has been reduced to 2Δ and is now characterized by a much higher probability density:

p(x) = 1/(2Δ), xₙ − Δ ≤ x ≤ xₙ + Δ. (1.26)

Obtaining any information about the quantity of interest to us consists, therefore, in reducing the uncertainty of its value.

As a characteristic of the uncertainty of the value of a random variable, C. Shannon introduced the concept of the entropy of the quantity x, calculated as

H(x) = −∫ p(x)·log p(x) dx. (1.27)

The units in which entropy is measured depend on the choice of the logarithm base in this expression. When decimal logarithms are used, entropy is measured in so-called decimal units, or dits. When binary logarithms are used, entropy is expressed in binary units, or bits.

In most cases, the uncertainty of our knowledge of a signal's value is caused by interference or noise. The disinformation effect of noise during signal transmission is determined by the entropy of the noise as a random variable. If the noise, in the probabilistic sense, does not depend on the transmitted signal, then, regardless of the signal's statistics, a definite amount of entropy can be assigned to the noise, characterizing its disinformation effect. In this case the system can be analyzed for noise and signal separately, which greatly simplifies the problem.

Shannon's theorem on the amount of information. If a signal with entropy H(x) is applied to the input of an information transmission channel, and the noise in the channel has entropy H(Δ), then the amount of information at the channel output is determined as

I = H(x) − H(Δ). (1.28)

If, in addition to the main signal transmission channel, there is an auxiliary channel, then in order to correct the errors caused by noise with entropy H(Δ), it is necessary to transmit through this channel an additional amount of information no less than

ΔI = H(Δ). (1.29)

This data can be encoded in such a way that it is possible to correct all errors caused by noise, except for an arbitrarily small fraction of these errors.

In our case, for a uniformly distributed random variable, the entropy before the measurement is

H(x) = log(x₂ − x₁), (1.30)

and the remaining, or conditional, entropy of the measurement result after receiving the reading xₙ equals

H(x | xₙ) = log(2Δ). (1.31)

Hence the amount of information obtained, equal to the difference between the initial and the remaining entropy, is

I = H(x) − H(x | xₙ) = log(x₂ − x₁) − log(2Δ) = log((x₂ − x₁)/(2Δ)). (1.32)
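The chain (1.30)-(1.32) reduces to two logarithms and a subtraction. A minimal sketch in bits; the instrument range and error are illustrative numbers, not values from the text:

```python
import math

# Uniform prior over the instrument range, uniform posterior of width 2*D.
x1, x2 = 0.0, 10.0     # instrument range (illustrative)
D = 0.05               # absolute instrument error (illustrative)

H_before = math.log2(x2 - x1)    # initial entropy (1.30), in bits
H_after = math.log2(2 * D)       # conditional entropy after reading (1.31)
I = H_before - H_after           # information gained (1.32)
# Equivalently I = log2((x2 - x1) / (2*D)) = log2(100) bits here.
```

Halving the instrument error Δ adds exactly one bit of information per measurement, which is a convenient sanity check for the formula.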

When analyzing systems with digital signals, quantization errors are treated as a stationary random process with a uniform probability distribution over the range of the quantization error. Fig. 1.12a, b and c show the probability densities of the quantization error for rounding, for truncation of the two's complement code, and for truncation of the direct code, respectively.



Obviously, quantization is a nonlinear operation. In analysis, however, a linear model of signal quantization is used, shown in Fig. 1.13, where x_q(nT) is the m-bit digital signal and e(nT) is the quantization error.

Probabilistic estimates of quantization errors are made by calculating the mathematical expectation

m_e = ∫ e·p_e(e) de (1.33)

and the variance

σ_e² = ∫ (e − m_e)²·p_e(e) de, (1.34)

where p_e is the probability density of the error. For rounding and truncation, respectively, we obtain

m_e = 0, σ_e² = Q²/12, (1.35)

m_e = −Q/2, σ_e² = Q²/12. (1.36)
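The standard figures behind (1.33)-(1.36) follow from the uniform error density: a uniform distribution on [−Q/2, Q/2] has zero mean and variance Q²/12. A minimal Monte Carlo sketch verifying this for rounding; the random test signal is an illustrative experiment, not part of the original text:

```python
import numpy as np

# Empirical mean and variance of the rounding quantization error.
rng = np.random.default_rng(1)
Q = 2.0 ** -8
x = rng.uniform(-1.0, 1.0, 200_000)
e = np.round(x / Q) * Q - x          # rounding error, ~uniform on [-Q/2, Q/2]

mean_e = e.mean()
var_e = e.var()
expected_var = Q**2 / 12             # the classical quantization-noise variance
```

The same Q²/12 variance appears for truncation; only the mean shifts to −Q/2, as (1.36) states.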

Time sampling and level quantization are inherent features of all microprocessor control systems, dictated by the limited speed and finite word length of the microprocessors used.

Each of us encounters discreteness every day; it is one of the properties inherent in matter. Literally translated from Latin, the word discretus means discontinuous. For example, a discrete signal is a method of transmitting information in which the carrier medium changes in time, taking any value from an existing list of valid values.

Of course, the term "discreteness" is used in a broader sense as well. In particular, progress in microelectronics is now aimed at creating and developing SoC ("System on a Chip") technology, in which all the components making up a device are tightly integrated on a single substrate. The opposite of this approach is discrete circuits, in which the elements are themselves complete products connected by communication lines.

It is now probably impossible to find a person who does not use a mobile phone or Skype on a computer. One of their tasks is the transmission of a sound stream (in particular, voice). Since such sound is a continuous wave, transmitting it directly would require a high-bandwidth channel. To solve this problem, it was proposed to use a discrete signal: not the wave itself but its digital representation (recall that we are talking about mobile phones and computers). Data values are sampled from the wave at certain intervals; that is, a discrete signal is created. Its advantage is obvious: a lower total data volume and the possibility of organizing packet transmission. The target receiver combines all the samples into a single block, reconstructing the original wave. The longer the intervals between samples, the higher the likelihood of distorting the original wave. Discretization is widely used in computing.

When talking about what a discrete signal is, one cannot help using a fitting analogy with an ordinary printed book. A person reading it receives a continuous flow of information, yet the data contained in it are "encoded" as sequences of letters, words and sentences. The author, in effect, forms a kind of discrete signal out of an indivisible thought, expressing it in blocks by means of one or another encoding method (alphabet, language). The reader in this example can perceive the author's idea only after mentally combining the words back into a stream of information.

You are probably reading this article on a computer screen, and even a monitor can serve as an example of where discreteness and continuity appear. Recall the old CRT-based models: in them the image was formed by a sequence of frames that had to be "drawn" several dozen times per second. Obviously, such a device uses a discrete method of constructing the picture.

A discrete signal is the exact opposite of a continuous one. The latter is a function of intensity versus time (if represented on a Cartesian plane); one example is a sound wave, which is characterized by frequency and amplitude but is not naturally interrupted anywhere. Most natural processes are described in this way. Although there are several ways of processing a continuous (analog) signal to reduce the data flow, in modern digital systems the discrete signal is the most common, partly because it can be converted back to the original quite simply, regardless of the latter's configuration. Incidentally, the terms "discrete" and "digital" are nearly equivalent.

In technical branches of knowledge, the term signal means:

1) a technical means for the transmission, circulation, and use of information;

2) a physical process representing an information message (a change in some parameter of the information carrier);

3) the semantic content of a certain physical state or process.

A signal is information (a message) about some processes, states, or physical quantities of objects of the material world, expressed in a form convenient for the transmission, processing, storage, and use of that information.

From a mathematical point of view, a signal is a function, that is, the dependence of one quantity on another.

    Purpose of Signal Processing

The purpose of signal processing is to extract the useful information carried by a signal and to transform that information into a form convenient for further use.

    Purpose of Signal Analysis

By "analysis" of signals we mean not only purely mathematical transformations, but also drawing conclusions, based on these transformations, about the specific features of the corresponding processes and objects. The goals of signal analysis are usually:
- determination or estimation of the numerical parameters of signals (energy, average power, root-mean-square value, etc.);
- decomposition of signals into elementary components in order to compare the properties of different signals;
- comparison of the degree of proximity ("similarity", "relatedness") of various signals, including with certain quantitative estimates.

    Signal registration

The concept of a signal is inextricably linked with the term signal registration, whose use is as broad and ambiguous as that of the term signal itself. In the most general sense, it can be understood as the operation of isolating a signal and converting it into a form convenient for further use, processing, and perception. Thus, when obtaining information about the physical properties of objects, signal registration is understood as the process of measuring those properties and transferring the results onto the material carrier of the signal, or as the direct conversion of some property of the object into an information parameter of the material carrier (usually electrical). But the term signal registration is also widely used for the separation of already formed signals carrying certain information from the sum of other signals (radio communication, telemetry, etc.), for the recording of signals on long-term storage media, and for many other processes related to signal processing.

    Internal and external noise sources

Noise is, as a rule, stochastic (random) in nature. Interference includes distortions of useful signals caused by various destabilizing factors (electrical pickup, vibration); types of noise and interference are distinguished by their sources and energy spectra. Depending on the nature of their effect on the signal, sources of noise and interference can be internal or external.

Internal interference is inherent in the physical nature of the signal sources and detectors, as well as of the material media. External sources of interference can be of artificial or natural origin. Artificial interference includes industrial noise and interference from operating equipment.

    What does the mathematical model of the signal provide?

The theory of analysis and processing of physical data rests on mathematical models of the corresponding physical fields and processes, on the basis of which mathematical models of signals are created. These models make it possible to abstract from the physical nature of a signal, to judge its properties in general terms, to predict changes in the signal under various conditions, and, in addition, to ignore a large number of secondary features. Knowledge of mathematical models also makes it possible to classify signals according to various criteria (for example, signals are divided into deterministic and stochastic).

    Signal classification

Signal classification is carried out on the basis of essential features of the corresponding mathematical models of signals. All signals are divided into two large groups: deterministic and random.

    Harmonic signals

Harmonic (sinusoidal) signals are described by the following formulas:

s(t) = A·sin(2πf₀t + φ) = A·sin(ω₀t + φ), s(t) = A·cos(ω₀t + φ′), (1.1.1)

Fig. 5. Harmonic signal and the spectrum of its amplitudes

where A, f₀, ω₀, φ are constants that can serve as information parameters of the signal: A is the signal amplitude, f₀ is the cyclic frequency in hertz, ω₀ = 2πf₀ is the angular frequency in radians per second, and φ and φ′ are the initial phase angles in radians. The period of one oscillation is T = 1/f₀ = 2π/ω₀. When φ′ = φ − π/2, the sine and cosine functions describe the same signal. The frequency spectrum of the signal is represented by the amplitude and the initial phase value at the frequency f₀ (at t = 0).

    Polyharmonic signals

Polyharmonic signals constitute the most widespread group of periodic signals and are described by the sum of harmonic oscillations:

s(t) = Σₙ Aₙ·sin(2πfₙt + φₙ) ≡ Σₙ Aₙ·sin(2πBₙf_pt + φₙ), Bₙ ∈ I, (1.1.2)

or directly by a function s(t) = y(t ± kT_p), k = 1, 2, 3, ..., where T_p is the period of one complete oscillation of the signal y(t), specified over one period. The value f_p = 1/T_p is called the fundamental oscillation frequency.

Fig. 6. Signal model. Fig. 7. Signal spectrum

Polyharmonic signals are the sum of a certain constant component (f₀ = 0) and an arbitrary (in the limit, infinite) number of harmonic components with arbitrary amplitudes Aₙ and phases φₙ, at frequencies that are multiples of the fundamental frequency f_p. In other words, a whole number of periods of every harmonic fits within the period of the fundamental frequency f_p, which is equal to or a multiple of the minimum harmonic frequency; this is what creates the periodicity of the signal. The frequency spectrum of polyharmonic signals is discrete, and therefore the second common mathematical representation of signals is by their spectra (Fourier series).
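A quick numerical check of this periodicity: summing harmonics at integer multiples of a fundamental f_p yields a signal that repeats exactly with period T_p = 1/f_p. A minimal sketch per (1.1.2); the amplitudes, phases, and f_p are illustrative assumptions:

```python
import numpy as np

# Polyharmonic signal: constant component plus harmonics 1 and 3 of f_p.
f_p = 2.0                              # fundamental frequency, Hz (illustrative)
T_p = 1.0 / f_p
t = np.linspace(0.0, 2.0, 4001)

def s(t):
    return (1.0
            + 0.8 * np.sin(2 * np.pi * 1 * f_p * t + 0.3)
            + 0.5 * np.sin(2 * np.pi * 3 * f_p * t + 1.1))

# Shifting the argument by one fundamental period leaves the signal unchanged.
periodicity_error = np.max(np.abs(s(t) - s(t + T_p)))
```

Replacing the multiplier 3 by an irrational value would destroy the common period, which is precisely the transition to the almost periodic signals discussed next.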

    Almost periodic signals

Almost periodic signals are close in form to polyharmonic ones. They also represent a sum of two or more harmonic signals (in the limit, infinitely many), but with arbitrary rather than multiple frequencies, at least two of which have a ratio that is not a rational number; as a result, the fundamental period of the total oscillation is infinitely large (Fig. 9).

Fig. 9. Almost periodic signal and the spectrum of its amplitudes

    Analog signals

An analog signal is a continuous or piecewise continuous function y = x(t) of a continuous argument; that is, both the function itself and its argument can take any values within certain intervals y₁ ≤ y ≤ y₂, t₁ ≤ t ≤ t₂. If the intervals of the signal values or of its independent variable are not limited, they are by default assumed to run from −∞ to +∞.

The set of possible signal values forms a continuum: a continuous space in which any signal point can be determined to arbitrary precision.

Sources of analog signals are physical processes and phenomena; the most common examples of analog signals are the time variations of the strength of electric, magnetic, and electromagnetic fields.

Discrete signals

Fig. 13. Discrete signal

A discrete signal (Fig. 13) is also a continuous function in its values, but it is defined only at discrete values of the argument. By the set of its values it is finite (countable) and is described by a discrete sequence of samples y(nΔt), where y₁ ≤ y ≤ y₂, Δt is the interval between samples (the sampling interval or step, sample time), and n = 0, 1, 2, ..., N. The reciprocal of the sampling step, f = 1/Δt, is called the sampling frequency. If a discrete signal is obtained by sampling an analog signal, it represents a sequence of samples whose values are exactly equal to the values of the original signal.

Digital signal

A digital signal is quantized in its values and discrete in its argument. It is described by a quantized lattice function yₙ = Q_k[y(nΔt)], where Q_k is the quantization function with k quantization levels, and the quantization intervals can be either uniform or non-uniform (for example, logarithmic). A digital signal is specified, as a rule, as a discrete series of numerical data: a numerical array of successive values of the argument at Δt = const, although in the general case the signal can also be specified as a table for arbitrary values of the argument.

In essence, a digital signal is, in its values (samples), a formalized version of a discrete signal in which the samples of the latter are rounded to a certain number of digits, as shown in Fig. 14. A digital signal is finite in its set of values. The process of converting analog samples with an infinite set of values into a finite number of digital values is called level quantization, and the rounding errors of the samples (the discarded values) arising during quantization are called quantization noise or quantization errors.

    Kotelnikov-Shannon theorem

The physical meaning of the Kotelnikov-Shannon theorem: if the maximum frequency in a signal is f, it is sufficient to have at least two samples with known values t₁ and t₂ within one period of this harmonic; it then becomes possible to write a system of two equations (y₁ = a·cos 2πft₁ and y₂ = a·cos 2πft₂) and solve it for the two unknowns, the amplitude a and the frequency f of the harmonic. Therefore, the sampling frequency must be at least twice the maximum frequency f in the signal. For lower frequencies this condition is satisfied automatically.

In practice this theorem is widely used, for example in the conversion of audio recordings. The range of frequencies perceived by humans extends from 20 Hz to 20 kHz; therefore, for lossless conversion the sampling must be performed at a frequency above 40 kHz, which is why CD audio is digitized at 44.1 kHz. The quantization operation (analog-to-digital conversion, ADC) consists in converting the discrete signal into a digital signal encoded in the binary system.
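The sampling-rate condition can be illustrated numerically: a tone below half the sampling frequency is recovered at its true frequency, while a tone above it aliases to a lower one. A minimal sketch using a plain FFT peak search; the tone frequencies are illustrative assumptions:

```python
import numpy as np

fs = 44100.0                      # sampling frequency, Hz
n = np.arange(4096)

def peak_freq(f_tone):
    # Sample a sine tone at fs and locate the strongest spectral bin.
    x = np.sin(2 * np.pi * f_tone * n / fs)
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), d=1.0 / fs)[np.argmax(spec)]

f_ok = peak_freq(10_000.0)        # below fs/2: recovered near 10 kHz
f_alias = peak_freq(30_000.0)     # above fs/2: aliases to fs - 30 kHz = 14.1 kHz
```

The 30 kHz tone is indistinguishable, after sampling, from a 14.1 kHz tone, which is exactly the ambiguity the theorem's 2f bound rules out.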

    System concept

A system for any purpose always has an input, to which an input signal or input action (in the general case, multidimensional) is applied, and an output, from which the processed output signal is taken. If the internal structure of the system and its internal transformation operations are not of fundamental importance, the system as a whole can be treated in formalized form as a black box.

A formalized system is defined by a system operator (algorithm) that converts the input signal s(t), the action, into the signal at the system output y(t), the response or output reaction of the system. The symbolic designation of the transformation operation is y(t) = T[s(t)].

For deterministic input signals, the relationship between input and output signals is uniquely specified by the system operator.

    System operator T

The system operator T is a rule (a set of rules, an algorithm) for converting the signal s(t) into the signal y(t). For well-known signal conversion operations, extended transformation-operator symbols are also used, in which a second symbol or special index indicates the specific type of operation (for example, TF for the Fourier transform and TF⁻¹ for the inverse Fourier transform).

    Linear and non-linear systems

When a random input signal is applied to the system, there is likewise a one-to-one correspondence between the processes at the input and output, but in this case the statistical characteristics of the output signal change. Any signal transformation is accompanied by a change in the signal's spectrum, and by the nature of these changes transformations are divided into two types: linear and nonlinear.

A transformation is nonlinear when new harmonic components appear in the signal spectrum; when a signal is transformed linearly, only the amplitudes of the spectral components change. Both types of changes can occur either with preservation or with distortion of the useful information. Linear systems constitute the main class of signal processing systems.

The term linearity means that the signal conversion system has an arbitrary, but necessarily linear, relationship between the input and output signals.

A system is considered linear if, within a specified range of input and output signals, its response to input signals is additive (the principle of superposition of signals holds) and homogeneous (the principle of proportional similarity holds).

    Additivity principle

The additivity principle requires that the response to the sum of two input signals be equal to the sum of the responses to each signal separately:

T[s₁(t) + s₂(t)] = T[s₁(t)] + T[s₂(t)].

    Principle of homogeneity

The homogeneity (proportional similarity) principle requires that the transformation scale remain unambiguous for any amplitude of the input signal:

T[c·s(t)] = c·T[s(t)].

    Basic system operations

The basic linear operations, from which any linear transformation operators can be formed, are scalar multiplication, shift, and addition of signals:

y(t) = b·x(t), y(t) = x(t − Δt), y(t) = a(t) + b(t).

Fig. 11.1.1. Graphs of the system operations

Addition and multiplication operations are linear only for discrete and analog signals.

For systems of dimension 2 or higher there is another basic operation, called spatial masking, which can be regarded as a generalization of scalar multiplication. For two-dimensional systems:

z(x, y) = c(x, y)·u(x, y),

where u(x, y) is the two-dimensional input signal and c(x, y) is a spatial mask of constant (weighting) coefficients. Spatial masking is the element-wise product of the signal values with the mask coefficients.
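The element-wise nature of spatial masking is clearest in code. A minimal sketch with small illustrative arrays (the particular signal and mask values are assumptions):

```python
import numpy as np

u = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])      # two-dimensional input signal u(x, y)
c = np.array([[0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0]])      # spatial mask of weighting coefficients

z = c * u                            # element-wise masking z = c(x, y) * u(x, y)
# Only the samples under nonzero mask weights survive in z.
```

Note that this is a pointwise product, not a convolution: each output sample depends only on the input sample and mask weight at the same position.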

    Differential equations as a universal tool for studying signals

Differential equations are a universal tool for specifying a particular relationship between input and output signals in both one-dimensional and multidimensional systems, and they can describe a system both in real time and a posteriori. In an analog one-dimensional linear system this relationship is usually expressed by a linear differential equation

Σ_{m=0}^{M} a_m·d^m y(t)/dt^m = Σ_{n=0}^{N} b_n·d^n s(t)/dt^n. (11.1.1)

When normalized to a₀ = 1, it follows that

y(t) = Σ_{n=0}^{N} b_n·d^n s(t)/dt^n − Σ_{m=1}^{M} a_m·d^m y(t)/dt^m. (11.1.1")

Essentially, the right-hand side of this expression displays, in the most general mathematical form, the content of the input-signal transformation operation, i.e., it specifies the operator for transforming the input signal into the output signal. For a unique solution of equations (11.1.1), in addition to the input signal s(t), certain initial conditions must be specified, for example the values of the solution y(0) and of its time derivative y"(0) at the initial moment.

A similar connection in a digital system is described by difference equations

a m y((k-m)t) =b n s((k-n)t).< 0. Интервал дискретизации в цифровых последовательностях отсчетов обычно принимается равным 1, т.к. выполняет только роль масштабного множителя.

    (11.1.2)

y(kt) =b n s((k-n)t) –a m y((k-m)t).

    (11.1.2")
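A direct recursive evaluation of the difference equation in the normalized form (11.1.2") can be sketched as follows, assuming a_0 = 1 and zero initial conditions (s and y are taken as zero for negative indices); the coefficient values in the usage example are illustrative:

```python
# Recursive evaluation of y(k) = sum_n b[n] s(k-n) - sum_m a[m] y(k-m), a0 = 1.

def difference_equation(s, b, a):
    """b: feed-forward coefficients b0..bN; a: feedback coefficients a1..aM."""
    y = []
    for k in range(len(s)):
        acc = sum(bn * s[k - n] for n, bn in enumerate(b) if k - n >= 0)
        acc -= sum(am * y[k - m] for m, am in enumerate(a, start=1) if k - m >= 0)
        y.append(acc)
    return y

# First-order recursive smoother: y(k) = 0.5 s(k) + 0.5 y(k-1),
# driven by a unit impulse at k = 0.
print(difference_equation([1, 0, 0, 0], b=[0.5], a=[-0.5]))
# [0.5, 0.25, 0.125, 0.0625]
```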

Backbone wide-area networks are used to form peer connections between large local networks belonging to major departments of an enterprise. Backbone territorial networks must provide high throughput, since the backbone combines the flows of a large number of subnets. In addition, backbone networks must be constantly available, that is, provide a very high availability factor, since they carry the traffic of many business-critical applications. Because of the special importance of backbones, their high cost can be justified. Since an enterprise usually does not have many large networks, backbone networks are not required to support an extensive access infrastructure.

Access networks are understood as the territorial networks needed to connect small local networks and individual remote computers to the central local network of an enterprise. While great attention has always been paid to the organization of backbone connections when creating a corporate network, the organization of remote access for enterprise employees has become a strategically important issue only recently. For many types of enterprise activity, quick access to corporate information from any geographical location determines the quality of the decisions made by its employees. The importance of this factor grows with the increase in the number of employees working from home (telecommuters) or frequently on business trips, and with the increase in the number of small branches of enterprises located in different cities and, perhaps, different countries.

    Multiplexing

Multiplexing is the use of one communication channel to transmit data to several subscribers. A communication line (channel) consists of a physical medium over which the information signals of the data transmission equipment are transmitted.

    Types of communication channels

    simplex - when the receiver communicates with the transmitter over one channel, with unidirectional transmission of information (for example, in television and radio broadcasting networks);

    half-duplex - when two communication nodes are connected by one channel, through which information is transmitted alternately in one direction, then in the opposite direction (in information-reference and request-response systems);

    duplex - allows you to transmit data simultaneously in two directions through the use of a four-wire communication line (two wires for transmitting, the other two for receiving data), or two frequency bands.

    Characteristics of communication lines

The main characteristics of the communication channel - throughput and reliability of data transmission

Channel capacity (the amount of information transmitted per unit of time) is estimated by the number of bits of data transmitted over the channel per second (bit/s).

The reliability of data transmission is assessed by the bit error rate (BER), defined as the probability of distortion of a transmitted data bit. For communication channels without additional error protection, the bit error rate is typically 10⁻⁴ to 10⁻⁶.
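As a small worked example of what a BER figure means in practice (the transfer size and the assumption of independent errors are illustrative simplifications, not from the text):

```python
# With BER = 1e-6: expected number of corrupted bits in a 1-MB transfer,
# and the probability that the whole block arrives with no errors at all.

ber = 1e-6
bits = 8 * 10**6                 # a 1-megabyte transfer
expected_errors = ber * bits     # mean number of bit errors
p_block_ok = (1 - ber) ** bits   # probability of zero errors (independence assumed)

print(expected_errors)           # 8.0
print(p_block_ok)                # roughly e**-8, i.e. about 0.00034
```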

    Main characteristics of cables

Computer networks use cables that comply with the international standard ISO/IEC 11801. This standard regulates the following basic characteristics of cables:

– attenuation (dB/m);

– resistance of the cable to internal sources of interference (if there is more than one pair of wires in the cable);

– impedance (characteristic impedance): the effective input resistance of the cable for alternating current;

– the level of external EM radiation in the conductor, which characterizes the cable's noise immunity;

– the degree of attenuation of external interference from various sources.

The most widely used types of cable are unshielded twisted pair, shielded twisted pair, coaxial cable and fiber-optic cable.

Unshielded twisted pair (UTP) has no protective shield around the pairs; shielded twisted pair (STP) adds such a shield and therefore offers better noise immunity than UTP.

RG-8 and RG-11 – thick coaxial cables with a characteristic impedance of 50 Ohms and an outer diameter of about 1.2 cm.

RG-58 and RG-59 – thin coaxial cables; RG-58 has a characteristic impedance of 50 Ohms, RG-59 of 75 Ohms.

    Data transmission media (wired and wireless)

Depending on the physical medium of data transmission, communication lines can be divided into:

    wired communication lines without insulating and shielding braids;

    cable, where communication lines such as twisted pair cables, coaxial cables or fiber optic cables are used to transmit signals;

    wireless (radio channels of terrestrial and satellite communications), using electromagnetic waves that propagate over the air to transmit signals.

Discrete signals naturally arise in cases where the message source provides information at fixed points in time. An example is information about air temperature transmitted by broadcasting stations several times a day. The property of a discrete signal is manifested here extremely clearly: in the pauses between messages there is no information about the temperature. In fact, the air temperature changes smoothly over time, so that the measurement results arise from the sampling of a continuous signal - an operation that records the reference values.

Discrete signals have acquired particular importance in recent decades under the influence of improvements in communication technology and the development of methods for processing information with high-speed computing devices. Great progress has been made in the development and use of specialized devices for processing discrete signals, the so-called digital filters.

This chapter is devoted to consideration of the principles of mathematical description of discrete signals, as well as the theoretical foundations for the construction of linear devices for their processing.

15.1. Discrete Signal Models

The distinction between discrete and analog (continuous) signals was emphasized in Chap. 1 when classifying radio signals. Let us recall the main property of a discrete signal: its values ​​are not determined at all times, but only at a countable set of points. If an analog signal has a mathematical model of the form of a continuous or piecewise continuous function, then the corresponding discrete signal is a sequence of sample signal values ​​at points, respectively.

Sampling sequence.

In practice, as a rule, samples of discrete signals are taken in time at an equal interval Δ, called the sampling interval (step):

The sampling operation, i.e. the transition from an analog signal to a discrete signal, can be described by introducing the generalized function

called the sampling sequence.

Obviously, a discrete signal is a functional (see Chapter 1), defined on the set of all possible analog signals and equal to the scalar product of the function

Formula (15.3) indicates the path to the practical implementation of a device for sampling an analog signal. The operation of the sampler is based on the gating operation (see Chapter 12): multiplication of the processed signal by the "comb" function. Since the duration of the individual pulses that make up the sampling sequence is zero, sample values of the processed analog signal appear at the output of an ideal sampler at equally spaced moments in time.

Fig. 15.1. Block diagram of a pulse modulator

Modulated pulse sequences.

Discrete signals began to be used back in the 40s when creating radio systems with pulse modulation. This type of modulation differs in that a periodic sequence of short pulses serves as a “carrier oscillation” instead of a harmonic signal.

A pulse modulator (Fig. 15.1) is a device with two inputs, one of which receives the original analog signal. The other input receives short synchronizing pulses with a repetition interval. The modulator is constructed in such a way that at the moment of applying each synchronizing pulse, the instantaneous value of the signal x(t) is measured. A sequence of pulses appears at the output of the modulator, each of which has an area proportional to the corresponding reference value of the analog signal.

The signal at the output of the pulse modulator will be called a modulated pulse sequence (MIP). Naturally, the discrete signal is a mathematical model of the MIP.

Note that from a fundamental point of view, the nature of the impulses from which the MIP is composed is indifferent. In particular, these pulses can have the same duration, while their amplitude is proportional to the sample values ​​of the signal being sampled. This type of continuous signal conversion is called pulse amplitude modulation (PAM). Another method is possible - pulse width modulation (PWM). Here, the amplitudes of the pulses at the modulator output are constant, and their duration (width) is proportional to the instantaneous values ​​of the analog oscillation.

The choice of one or another pulse modulation method is dictated by a number of technical considerations, the convenience of circuit implementation, and the characteristic features of the transmitted signals. For example, it is inappropriate to use PAM if the useful signal varies over a very wide range, i.e., as is often said, has a wide dynamic range. For undistorted transmission of such a signal, a transmitter with a strictly linear amplitude characteristic is required; creating such a transmitter is an independent, technically complex problem. PWM systems impose no requirements on the linearity of the amplitude characteristic of the transmitting device. However, their circuit implementation may be somewhat more complicated compared to PAM systems.

A mathematical model of an ideal MIP can be obtained as follows. Let's consider the formula for the dynamic representation of a signal (see Chapter 1):

Since the MIP is defined only at points, integration in formula (15.4) should be replaced by summation over index k. The role of the differential will be played by the sampling interval (step). Then the mathematical model of a modulated pulse sequence formed by infinitely short pulses will be given by the expression

where are sample values ​​of the analog signal.

Spectral density of a modulated pulse sequence.

Let us examine the spectrum of the signal arising at the output of an ideal pulse modulator and described by expression (15.5).

Note that a signal of the MIP type, up to the proportionality coefficient Δ, is equal to the product of the function and the sampling sequence

It is known that the spectrum of the product of two signals is proportional to the convolution of their spectral densities (see Chapter 2). Therefore, the laws of correspondence between signals and spectra are known:

then the spectral density of the MIP signal

To find the spectral density of the sampling sequence, we expand the periodic function into a complex Fourier series:

The coefficients of this series

Turning to formula (2.44), we obtain

that is, the spectrum of the sampling sequence consists of an infinite collection of delta pulses in the frequency domain. This spectral density is a periodic function with a period

Finally, substituting formula (15.8) into (15.7) and changing the order of the integration and summation operations, we find

So, the spectrum of the signal obtained as a result of ideal sampling with infinitely short gate pulses is the sum of an infinite number of “copies” of the spectrum of the original analog signal. Copies are located on the frequency axis at equal intervals equal to the value of the angular frequency of the first harmonic of the sampling pulse sequence (Fig. 15.2, a, b).

Fig. 15.2. Spectral density of a modulated pulse sequence at different values of the upper limit frequency: a – the upper limit frequency is high; b – the upper limit frequency is low (the color indicates the spectral density of the original signal subjected to sampling)

Reconstruction of a continuous signal from a modulated pulse sequence.

In what follows, we will assume that the real signal has a low-frequency spectrum, symmetrical with respect to the point and limited by the upper limit frequency. From Fig. 15.2, b it follows that if , then individual copies of the spectrum do not overlap each other.

Therefore, an analog signal with such a spectrum, subjected to pulse sampling, can be completely accurately restored using an ideal low-pass filter, the input of which is a pulse sequence of the form (15.5). In this case, the largest permissible sampling interval is , which is consistent with Kotelnikov’s theorem.

Indeed, let the filter restoring a continuous signal have a frequency transfer coefficient

The impulse response of this filter is described by the expression

Taking into account that the MIP signal of the form (15.5) is a weighted sum of delta pulses, we find the response at the output of the reconstruction filter

This signal, up to a scale factor, repeats the original oscillation with a limited spectrum.
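The ideal-LPF reconstruction described above amounts to sinc interpolation of the samples. The following sketch checks this numerically; the 1-Hz test sine, sampling step and evaluation point are illustrative choices, and the finite sum only approximates the ideal infinite one, so agreement is approximate rather than exact:

```python
# Reconstruct a band-limited signal from its samples via sinc interpolation.
import math

def sinc_reconstruct(samples, T, t):
    """x(t) = sum_k x[k] * sinc((t - k*T)/T), with sinc(u) = sin(pi u)/(pi u)."""
    total = 0.0
    for k, xk in enumerate(samples):
        u = (t - k * T) / T
        total += xk * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

f = 1.0       # 1-Hz sine, so the upper limit frequency is 1 Hz
T = 0.1       # sampling step, well below the Kotelnikov limit 1/(2*f) = 0.5 s
samples = [math.sin(2 * math.pi * f * k * T) for k in range(200)]

# Away from the edges of the record, the interpolation tracks the original closely:
t = 10.05     # a point between two samples, near the middle of the record
err = abs(sinc_reconstruct(samples, T, t) - math.sin(2 * math.pi * f * t))
print(err)    # small residual due to truncating the infinite sum
```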

An ideal low-pass filter is physically unrealizable and can serve only as a theoretical model to explain the principle of reconstructing a message from its discrete pulse samples. A real low-pass filter has a frequency response that either covers several lobes of the MIP spectral diagram or, concentrating near zero frequency, turns out to be significantly narrower than the central lobe of the spectrum. For example, Fig. 15.3, b–f shows curves characterizing the signal at the output of an RC circuit used as a reconstruction filter (Fig. 15.3, a).

Fig. 15.3. Reconstruction of a continuous signal from its pulse samples using an RC circuit: a – filter circuit; b – discrete input signal; c, d – frequency response of the filter and the signal at its output in the case of ; e, f – the same, for the case

From the above graphs it can be seen that a real reconstruction filter inevitably distorts the input oscillation.

Note that to reconstruct the signal, you can use either the central or any side lobe of the spectral diagram.

Determination of the spectrum of an analog signal from a set of samples.

Having the MIP representation, one can not only restore the analog signal but also find its spectral density. To do this, one should first relate the spectral density of the MIP directly to the sample values:

(15.13)

This formula exhaustively solves the problem posed under the above limitation.


INTRODUCTION TO DIGITAL SIGNAL PROCESSING

Digital signal processing (DSP) is one of the newest and most powerful technologies, actively applied across a wide range of fields of science and technology, such as communications, meteorology, radar and sonar, medical imaging, digital audio and television broadcasting, and exploration of oil and gas fields. One can say that digital signal processing technologies are penetrating widely and deeply into all spheres of human activity. Today, DSP is part of the basic knowledge needed by scientists and engineers in all industries without exception.

Signals

What is a signal? In the most general formulation, this is the dependence of one quantity on another. That is, from a mathematical point of view, the signal is a function. Dependencies on time are most often considered. The physical nature of the signal may be different. Very often this is electrical voltage, less often – current.

Signal representation forms:

1. time domain;

2. spectral (in the frequency domain).

The cost of digital data processing is less than analogue and continues to decline, while the performance of computing operations is continuously increasing. It is also important that DSP systems are highly flexible. They can be supplemented with new programs and reprogrammed to perform different operations without changing the equipment. Therefore, interest in scientific and applied issues of digital signal processing is growing in all branches of science and technology.

PREFACE TO DIGITAL SIGNAL PROCESSING

Discrete signals

The essence of digital processing is that a physical signal (voltage, current, etc.) is converted into a sequence of numbers, which is then subjected to mathematical transformations in a computer.

Analog, discrete and digital signals

The original physical signal is a continuous function of time. Such signals, determined at all times t, are called analog.

What signal is called digital? Let's consider some analog signal (Fig. 1.1 a). It is specified continuously over the entire time interval under consideration. An analog signal is considered to be absolutely accurate, unless measurement errors are taken into account.

Fig. 1.1 a) Analog signal

Fig. 1.1 b) Sampled signal

Fig. 1.1 c) Quantized signal

In order to obtain a digital signal, two operations must be performed: sampling and quantization. The process of converting an analog signal into a sequence of samples is called sampling, and the result of this transformation is a discrete signal. Thus, sampling consists in taking a sequence of samples from the analog signal (Fig. 1.1 b); each element of the sequence, called a sample, is separated in time from its neighbors by a certain interval T, called the sampling interval or (since the sampling interval is usually constant) the sampling period. The reciprocal of the sampling period is called the sampling rate and is defined as:

f_d = 1/T (1.1)

When a signal is processed in a computing device, its samples are represented as binary numbers with a limited number of bits. As a result, the samples can take only a finite set of values, so representing the signal inevitably involves rounding. The process of converting signal samples into numbers is called quantization. The resulting rounding errors are called quantization errors or quantization noise. Thus, quantization is the reduction of the levels of the sampled signal to a certain grid (Fig. 1.1 c), most often by ordinary rounding to the nearest level. A signal that is discrete in time and quantized in level is a digital signal.
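The two operations just described can be sketched in a few lines of Python; the test signal, sampling step, bit depth and full-scale range below are illustrative assumptions, not values from the text:

```python
# Sampling an "analog" signal (a Python function) with step T,
# then quantizing each sample to a b-bit uniform grid by ordinary rounding.
import math

def sample(x, T, n_samples):
    """Sampling: take x(k*T) for k = 0 .. n_samples-1."""
    return [x(k * T) for k in range(n_samples)]

def quantize(samples, bits, full_scale=1.0):
    """Round each sample to the nearest of 2**bits levels spanning [-fs, fs]."""
    levels = 2 ** bits
    step = 2 * full_scale / (levels - 1)
    return [round(s / step) * step for s in samples]

x = lambda t: math.sin(2 * math.pi * t)       # the "analog" signal
discrete = sample(x, T=0.125, n_samples=8)    # 8 samples per period
digital = quantize(discrete, bits=3)          # coarse 3-bit level grid
print(digital)
```

Each quantized value differs from the corresponding sample by at most half a grid step, which is exactly the quantization error discussed above.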

The conditions under which an analog signal can be completely restored from its digital equivalent, preserving all the information originally contained in the signal, are expressed by the theorems of Nyquist, Kotelnikov and Shannon, which are essentially equivalent. To sample an analog signal with full preservation of information in its digital equivalent, the maximum frequency in the analog signal must be no more than half the sampling frequency, that is, f_max ≤ (1/2)f_d; in other words, there must be at least two samples per period of the highest frequency. If this condition is violated, actual frequencies in the digital signal are masked by (substituted with) lower frequencies: instead of the actual frequency, an "apparent" frequency is recorded in the digital signal, and restoration of the actual frequency in the analog signal becomes impossible. The reconstructed signal will look as if frequencies above half the sampling frequency were reflected from the frequency (1/2)f_d into the lower part of the spectrum and superimposed on the frequencies already present there. This effect is called aliasing (frequency folding). A clear example of aliasing is the illusion often seen in movies: a car wheel appears to rotate against the direction of motion when, between successive frames (the analogue of the sampling rate), the wheel makes more than half a revolution.
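The masking effect can be verified numerically: sampling a sine whose frequency exceeds f_d/2 produces exactly the same samples as a lower "apparent" frequency. The 10-Hz sampling rate and 9-Hz tone below are illustrative choices:

```python
# Aliasing: with f_d = 10 Hz, a 9-Hz sine violates f_max <= f_d/2 and yields
# exactly the samples of an apparent -1 Hz (mirrored 1-Hz) sine.
import math

fd = 10.0                 # sampling frequency, Hz
T = 1.0 / fd
ks = range(20)

high = [math.sin(2 * math.pi * 9.0 * k * T) for k in ks]      # 9 Hz > fd/2
alias = [math.sin(2 * math.pi * (-1.0) * k * T) for k in ks]  # apparent -1 Hz

max_diff = max(abs(h - a) for h, a in zip(high, alias))
print(max_diff)           # essentially zero: the sample sets coincide
```

After sampling, the two signals are indistinguishable, which is why the 9-Hz component cannot be recovered.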

Conversion of the signal to digital form is performed by analog-to-digital converters (ADCs). As a rule, they use a binary number system with a certain number of digits on a uniform scale. Increasing the number of bits improves the accuracy of measurement and expands the dynamic range of the measured signals. Information lost because of an insufficient number of ADC bits is irrecoverable; there are only estimates of the resulting error in the "rounding" of samples, for example, through the power of the noise generated by an error in the last ADC bit. For this purpose, the concept of signal-to-noise ratio is used: the ratio of signal power to noise power (in decibels). The most commonly used ADCs are 8-, 10-, 12-, 16-, 20- and 24-bit. Each additional bit improves the signal-to-noise ratio by about 6 decibels. However, increasing the number of bits reduces the attainable sampling rate and increases the cost of the equipment. An important aspect is also the dynamic range, determined by the maximum and minimum values of the signal.
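The "6 dB per bit" rule quoted above is commonly written as SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit ADC with a full-scale sine input; a trivial sketch:

```python
# Ideal quantization SNR of an N-bit ADC for a full-scale sine input.

def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

for n in (8, 12, 16, 24):
    print(n, round(ideal_snr_db(n), 2))
# 8 49.92
# 12 74.0
# 16 98.08
# 24 146.24
```

Going from 8 to 16 bits thus buys roughly 48 dB of additional signal-to-noise ratio.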

Digital signal processing is performed either by special processors or on general-purpose computers using special programs. Linear systems are the easiest to consider. Systems are called linear if the principle of superposition holds for them (the response to a sum of input signals equals the sum of the responses to each signal separately) together with homogeneity (a change in the amplitude of the input signal causes a proportional change in the output signal).



If a shifted input signal x(t − t0) generates the correspondingly shifted output signal y(t − t0) for any shift t0, then the system is called time-invariant. Its properties can be studied at any arbitrarily chosen time. To describe a linear system, a special input signal is introduced: the unit impulse (impulse function).

The unit impulse (unit sample) u_0(n) is equal to 1 at n = 0 and to 0 for all other n (Fig. 1.2):

Fig. 1.2. Unit impulse

Due to the properties of superposition and homogeneity, any input signal can be represented as a sum of such impulses applied at different times and multiplied by the corresponding coefficients. The output signal of the system in this case is the sum of the responses to these impulses. The response to a unit impulse (an impulse of unit amplitude) is called the impulse response of the system, h(n). Knowledge of the impulse response makes it possible to analyze the passage of any signal through a discrete system; indeed, an arbitrary signal x(n) can be represented as a linear combination of unit samples.
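This construction, with the input written as a weighted sum of shifted unit impulses and the output as the sum of the corresponding responses, is precisely discrete convolution with h(n); a minimal sketch (the impulse response and input values are illustrative):

```python
# Output of an LTI system as the convolution of the input with its impulse response.

def unit_impulse(n, length):
    """u0 shifted to position n: 1 at index n, 0 elsewhere."""
    return [1 if i == n else 0 for i in range(length)]

def convolve(x, h):
    """y(n) = sum_k x(k) * h(n - k)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

h = [1.0, 0.5, 0.25]        # illustrative impulse response
x = [2.0, 0.0, 1.0]         # input: 2*u0(n) + u0(n-2)
print(convolve(x, h))       # [2.0, 1.0, 1.5, 0.5, 0.25]
```

Note that the output is exactly 2·h(n) + h(n − 2): the weighted, shifted copies of the impulse response add up, as the superposition argument predicts.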