Analog-to-digital converters. Static and dynamic parameters of ADCs and DACs


  • Contents
  • Introduction
  • 1. Technical specifications
  • 2. Development and description of a system of measuring channels for determining static and dynamic characteristics
  • 2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments
  • 2.2 Development of complexes of standardized metrological characteristics
  • 3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS
  • 3.1 Development of metrological reliability of measuring instruments
  • 3.2 Changes in metrological characteristics of measuring instruments during operation
  • 3.3 Development of models for standardizing metrological characteristics
  • 4. CLASSIFICATION OF SIGNALS
  • 5. Channel development
  • 5.1 Development of a channel model
  • 5.2 Development of a measuring channel model
  • LITERATURE

Introduction

One of the main forms of state metrological supervision and departmental control aimed at ensuring the uniformity of measurements in the country, as mentioned earlier, is the verification of measuring instruments. Instruments released from production and repair, received from abroad, as well as those in operation and storage are subject to verification. The basic requirements for the organization and procedure of verification are established by GOST "GSI. Verification of measuring instruments. Organization and procedure." The term "verification" was introduced by GOST "GSI. Metrology. Terms and definitions" as "the determination by a metrological body of the errors of a measuring instrument and the establishment of its suitability for use." In some cases, during verification, instead of determining error values, it is checked whether the error is within acceptable limits.

Thus, verification of measuring instruments is carried out to establish their suitability for use. Those measuring instruments are considered suitable for use during a certain verification interval whose verification confirms their compliance with the metrological and technical requirements for the given type of instrument. Measuring instruments are subjected to primary, periodic, extraordinary, inspection and expert verification. Instruments undergo primary verification upon release from production or repair, as well as upon import. Instruments in operation or storage are subject to periodic verification at calibration intervals established so as to ensure the suitability of the instrument for use in the period between verifications. Inspection verification is carried out to determine the suitability of measuring instruments for use during state supervision and departmental metrological control over their condition and use. Expert verification is performed when controversial issues arise concerning the metrological characteristics (MX), serviceability of measuring instruments and their suitability for use.

Metrological certification is a set of activities to study the metrological characteristics and properties of a measuring instrument in order to decide on its suitability for use as a reference instrument. Usually a special program of work is drawn up for metrological certification, the main stages of which are: experimental determination of metrological characteristics; analysis of the causes of failures; establishing a verification interval, etc. Metrological certification of measuring instruments used as reference ones is carried out before commissioning, after repair and, if necessary, when changing the category of the reference instrument. The results of metrological certification are documented with appropriate documents (protocols, certificates, notices of unsuitability of the measuring instrument). The characteristics of the types of measuring instruments used determine the methods for their verification.

In the practice of calibration laboratories, various methods for calibrating measuring instruments are known, which for unification are reduced to the following:

* direct comparison using a comparator (i.e. using comparison tools);

* direct measurement method;

* method of indirect measurements;

* method of independent verification (i.e. verification of measuring instruments of relative quantities, which does not require the transfer of unit sizes).

Verification of measuring systems is carried out by state metrological bodies collectively called the State Metrological Service. The activities of the State Metrological Service are aimed at solving the scientific and technical problems of metrology and implementing the necessary legislative and control functions, such as: establishing units of physical quantities approved for use; creation of exemplary measuring instruments and measurement methods of the highest precision; development of all-Union verification schemes; determination of physical constants; development of measurement theory, error estimation methods, etc.

The tasks facing the State Metrological Service are solved with the help of the State System for Ensuring the Uniformity of Measurements (GSI). The state system for ensuring the uniformity of measurements is the regulatory and legal basis for the metrological support of scientific and practical activities in terms of assessing and ensuring measurement accuracy. It is a set of regulatory and technical documents that establish a unified nomenclature, methods for presenting and assessing the metrological characteristics of measuring instruments, rules for the standardization and certification of measurements, registration of their results, and requirements for state tests, verification and examination of measuring instruments. The main regulatory and technical documents of the state system for ensuring the uniformity of measurements are state standards. On the basis of these standards, regulatory and technical documents are developed that specify their general requirements for various industries, measurement areas and measurement techniques.

1. Technical specifications

1.1 Development and description of a system of measuring channels to determine static and dynamic characteristics.

1.2 Materials of scientific and methodological developments of the ISIT department

1.3 Purpose and objectives

1.3.1 This system is designed to determine the characteristic instrumental components of measurement errors.

1.3.2 Develop a measuring information system that allows automatically obtaining the necessary information, processing it and issuing it in the required form.

1.4 System requirements

1.4.1 The rules for selecting sets of standardized metrological characteristics for measuring instruments and the methods for their standardization are determined by GOST 8.009-84.

1.4.2 Set of standardized metrological characteristics:

1. measures and digital-to-analog converters;

2. measuring and recording instruments;

3. analog and analog-to-digital measuring converters.

1.4.3 Instrumental error according to the first model of normalized metrological characteristics is formed by statistically combining:

the systematic component;

the random component;

the random component due to hysteresis;

the additional errors;

the dynamic error.

1.4.4 Instrumental error according to the second model of normalized metrological characteristics is taken as the arithmetic sum of the largest possible values of its components:

Δ_SI = Δ_0 + Σ|Δ_add,i| + |Δ_dyn|,

where Δ_0 is the main SI error without breaking it into components.

1.4.5 Compliance of models of standardized metrological characteristics with GOST 8.009-84 on the formation of complexes of standardized metrological characteristics.

2. Development and description of a system of measuring channels to determine static and dynamic characteristics

2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments

When using SI, it is fundamentally important to know the degree to which the information being measured, contained in the output signal, corresponds to its true value. For this purpose, certain metrological characteristics (MX) are introduced and standardized for each SI.

Metrological characteristics are characteristics of the properties of a measuring instrument that influence the measurement result and its errors. The characteristics established by regulatory and technical documents are called standardized, and those determined experimentally are called valid. The MX nomenclature, the rules for selecting standardized MX complexes for measuring instruments and the methods for their standardization are determined by the GOST 8.009-84 standard "GSI. Standardized metrological characteristics of measuring instruments."

Metrological characteristics of SI allow:

determine measurement results and calculate estimates of the characteristics of the instrumental component of the measurement error in real conditions of SI application;

calculate MX channels of measuring systems consisting of a number of measuring instruments with known MX;

make an optimal choice of SI, providing the required quality of measurements under known conditions of their use;

compare SIs of various types, taking into account the conditions of use.

When developing principles for the selection and standardization of measuring instruments, it is necessary to adhere to a number of provisions outlined below.

1. The main condition for the possibility of solving all of the listed problems is the presence of an unambiguous connection between normalized MX and instrumental errors. This connection is established through a mathematical model of the instrumental component of the error, in which the normalized MX must be arguments. It is important that the MX nomenclature and methods of expressing them are optimal. Experience in operating various SIs shows that it is advisable to normalize the MX complex, which, on the one hand, should not be very large, and on the other hand, each standardized MX must reflect the specific properties of the SI and, if necessary, can be controlled.

Standardization of MX measuring instruments should be carried out on the basis of uniform theoretical premises. This is due to the fact that measuring instruments based on different principles can participate in measurement processes.

Normalized MX must be expressed in such a form that with their help it is possible to reasonably solve almost any measurement problems and at the same time it is quite simple to control the measuring instruments for compliance with these characteristics.

Normalized MX must provide the possibility of statistical integration and summation of the components of the instrumental measurement error.

In the general case it can be defined as the sum (combination) of the following error components:

Δ_0(t), due to the difference between the actual conversion function under normal conditions and the nominal one assigned by the relevant documents to this type of SI; this error is called the main error;

Δ_add, caused by the reaction of the SI to changes in external influencing quantities and in the informative parameters of the input signal relative to their nominal values; this error is called additional;

Δ_dyn, caused by the reaction of the SI to the rate (frequency) of change of the input signal; this component, called the dynamic error, depends both on the dynamic properties of the measuring instrument and on the frequency spectrum of the input signal;

Δ_int, caused by the interaction of the measuring instrument with the measurement object or with other measuring instruments connected in series with it in the measuring system; this error depends on the characteristics of the parameters of the SI input circuit and the output circuit of the measurement object.

Thus, the instrumental component of the SI error can be represented as

Δ_instr = Δ_0 * Δ_add * Δ_dyn * Δ_int,

where * is a symbol for the statistical combination of components.

The first two components represent the static error of the SI, and the third is the dynamic one. Of these, only the main error is determined by the properties of the SI itself. Additional and dynamic errors depend both on the properties of the SI itself and on other causes (external conditions, parameters of the measuring signal, etc.).

The requirements for the universality and simplicity of the statistical combination of the components of the instrumental error determine the need for their statistical independence - non-correlation. However, the assumption of the independence of these components is not always true.
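
To make the combination rule concrete, here is a minimal Python sketch (all numeric values are assumed, purely illustrative): independent components given as standard deviations combine by root-sum-of-squares, while the worst-case arithmetic sum used later for model 2 gives a coarser upper bound.

```python
import math

def combine_rss(components):
    """Statistical combination of independent (non-correlated) error
    components given as standard deviations: sqrt(sum of squares)."""
    return math.sqrt(sum(s ** 2 for s in components))

def combine_arithmetic(components):
    """Worst-case arithmetic sum of the largest possible component values."""
    return sum(abs(s) for s in components)

# Assumed components: main, additional, dynamic, interaction (in % of range)
sigmas = [0.10, 0.05, 0.08, 0.02]
print("statistical (RSS) estimate: %.3f %%" % combine_rss(sigmas))
print("arithmetic upper bound:     %.3f %%" % combine_arithmetic(sigmas))
```

As expected, the statistical estimate is noticeably smaller than the arithmetic bound, which is exactly why the two models described below coexist.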

Isolating the dynamic error of the SI as a summed component is permissible only in a particular, but very common case, when the SI can be considered a linear dynamic link and when the error is a very small value compared to the output signal. A dynamic link is considered linear if it is described by linear differential equations with constant coefficients. For SI, which are essentially nonlinear links, separating static and dynamic errors into separately summable components is unacceptable.

Normalized MX must be invariant to the conditions of use and operating mode of the SI and reflect only its properties.

The choice of MX must be carried out so that the user has the ability to calculate the SI characteristics from them under real operating conditions.

The normalized MX given in the regulatory and technical documentation reflect the properties not of a single instance of an SI, but of the entire set of SI of a given type, i.e. are nominal. A type is understood as a set of measuring instruments that have the same purpose, layout and design and satisfy the same requirements regulated in the technical specifications.

The metrological characteristics of an individual SI of this type can be any within the range of nominal MX values. It follows that the MX of a measuring instrument of this type should be described as a non-stationary random process. A mathematically strict account of this circumstance requires normalization of not only the limits of MX as random variables, but also their time dependence (i.e., autocorrelation functions). This will lead to extremely complex system rationing and the practical impossibility of controlling MX, since in this case it would have to be carried out at strictly defined intervals. As a result, a simplified standardization system was adopted, providing a reasonable compromise between mathematical rigor and the necessary practical simplicity. In the adopted system, low-frequency changes in the random components of the error, the period of which is commensurate with the duration of the verification interval, are not taken into account when normalizing MX. They determine the reliability indicators of measuring instruments, determine the choice of rational calibration intervals and other similar characteristics. High-frequency changes in the random components of the error, the correlation intervals of which are commensurate with the duration of the measurement process, must be taken into account by normalizing, for example, their autocorrelation functions.

2.2 Development of complexes of standardized metrological characteristics

The wide variety of SI groups makes it impossible to regulate specific MX complexes for each of these groups in one regulatory document. At the same time, all SI cannot be characterized by a single set of normalized MX, even if it is presented in the most general form.

The main feature of dividing measuring instruments into groups is the commonality of the complex of standardized MXs necessary to determine the characteristic instrumental components of measurement errors. In this case, it is advisable to divide all measuring instruments into three large groups, presented according to the degree of complexity of MX: 1) measures and digital-to-analog converters; 2) measuring and recording instruments; 3) analog and analog-to-digital measuring converters.

When establishing a set of standardized MX, the following model of the instrumental component of the measurement error is adopted:

Δ_instr = Δ_SI * Δ_int,

where the symbol * indicates the combination of the SI error in real conditions of its use and the error component Δ_int caused by the interaction of the SI with the measurement object. By combination we mean applying some functional to the components, which allows us to calculate the error caused by their joint influence. In each case, the functional is determined based on the properties of a specific SI.

The entire MX population can be divided into two large groups. In the first of them, the instrumental component of the error is determined by statistically combining its individual components. In this case, the confidence interval in which the instrumental error lies is determined with a given confidence probability less than one. For MX of this group, the following error model is adopted in real application conditions (model 1):

Δ_SI = Δ_sys * Δ_rand * Δ_hyst * Δ_add,1 * ... * Δ_add,L * Δ_dyn,

where Δ_sys is the systematic component;

Δ_rand is the random component;

Δ_hyst is the random component due to hysteresis;

Δ_add,1 * ... * Δ_add,L is the combination of additional errors;

Δ_dyn is the dynamic error;

L is the number of additional errors, equal to the number of all quantities that significantly affect the error in real conditions.

Depending on the properties of a given type of SI and the operating conditions of its use, individual components may be missing.

The first model is selected if it is acceptable that the error occasionally exceeds the value calculated from the standardized characteristics. In this case, using the MX complex, it is possible to calculate point and interval characteristics within which the instrumental component of the measurement error lies with any given confidence probability close to unity, but less than it.

For the second MX group, statistical aggregation of components is not applied. Such measuring instruments include laboratory instruments, as well as most reference instruments, whose use does not involve repeated observations with averaging of results. The instrumental error in this case is defined as the arithmetic sum of the largest possible values of its components. This estimate gives a confidence interval with a probability equal to one, which is an upper-bound estimate of the desired error interval, covering all possible values, including very rarely realized ones. This leads to a significant tightening of the requirements for MX, which can be justified only for the most critical measurements, for example those related to the health and life of people, or with the possibility of catastrophic consequences of incorrect measurements.

Arithmetic summation of the largest possible values of the components of the instrumental error leads to the inclusion in the complex of normalized MX of the limits of permissible error, rather than statistical moments. This is also acceptable for measuring instruments that have no more than three components, each of which is determined by a separate standardized MX. In this case, the calculated estimates of the instrumental error obtained by arithmetic combination of the largest values of its components and by statistical summation of the characteristics of the components (with a probability which, although less than unity, is quite close to it) will practically not differ. For the case under consideration, the SI error model 2 is:

Δ_SI = Δ_0 + Σ|Δ_add,i| + |Δ_dyn|.

Here Δ_0 is the main SI error without breaking it into components (unlike model 1).

3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS

3.1 Development of metrological reliability of measuring instruments.

Model 2 is applicable only for those SIs whose random component is negligible.

The issues of choosing MX are regulated in sufficient detail in GOST 8.009-84, which shows the characteristics that should be standardized for the above-mentioned SI groups. The list can be adjusted for a specific measuring instrument, taking into account its features and operating conditions. It is important to note that one should not normalize those MXs that make an insignificant contribution to the instrumental error compared to others. Whether a given error component is significant or not is determined on the basis of the significance criteria given in GOST 8.009-84.

During operation, the metrological characteristics and parameters of the measuring instrument undergo changes. These changes are random, monotonous or fluctuating in nature and lead to failures, i.e. to the inability of the SI to perform its functions. Failures are divided into non-metrological and metrological.

Non-metrological is a failure caused by reasons not related to changes in the MX of the measuring instrument. They are mainly of an obvious nature, appear suddenly and can be detected without verification.

Metrological is a failure caused by MX leaving the established permissible limits. As studies have shown, metrological failures occur much more often than non-metrological ones. This necessitates the development of special methods for their prediction and detection. Metrological failures are divided into sudden and gradual.

Sudden failure is a failure characterized by an abrupt change in one or more MXs. These failures, due to their randomness, cannot be predicted. Their consequences (failure of readings, loss of sensitivity, etc.) are easily detected during operation of the device, i.e. by the nature of their manifestation they are obvious. A feature of sudden failures is the constancy of their intensity over time. This makes it possible to apply classical reliability theory to analyze these failures. In this regard, failures of this kind will not be considered further.

Gradual failure is a failure characterized by a monotonous change in one or more MXs. By the nature of their manifestation, gradual failures are hidden and can only be detected based on the results of periodic monitoring of the measuring instruments. In the following, it is these types of failures that will be considered.

The concept of metrological serviceability of a measuring instrument is closely related to the concept of "metrological failure". It refers to the state of the SI in which all standardized MX correspond to the established requirements. The ability of an SI to preserve the set values of its metrological characteristics for a given time under certain modes and operating conditions is called metrological reliability. The specificity of the problem of metrological reliability is that the main premise of classical reliability theory, the constancy of the failure rate over time, is invalid for it. Modern reliability theory is focused on products that have two characteristic states: operational and inoperative. A gradual change in the SI error makes it possible to introduce as many operational states as desired, with different levels of operating efficiency determined by how closely the error approaches the permissible limit values.

The concept of metrological failure is to a certain extent conditional, since it is determined by the MX tolerance, which in general can vary depending on specific conditions. It is also important that it is impossible to record the exact time of occurrence of a metrological failure due to the hidden nature of its manifestation, while obvious failures, which the classical reliability theory deals with, can be detected at the moment of their occurrence. All this required the development of special methods for analyzing the metrological reliability of SI.

The reliability of a measuring instrument characterizes its behavior over time and is a generalized concept that includes stability, reliability, durability, maintainability (for recoverable measuring instruments) and storability.

The stability of an SI is a qualitative characteristic reflecting the constancy of its MX over time. It is described by the time dependences of the parameters of the error distribution law. Metrological reliability and stability are different properties of the same SI aging process. Stability carries more information about the constancy of the metrological properties of a measuring instrument. It is, as it were, an "internal" property. Reliability, on the contrary, is an "external" property, since it depends both on stability and on the accuracy of measurements and the values of the tolerances used.

Reliability is the property of an SI to continuously maintain an operational state for some time. It is characterized by two states: operational and inoperative. However, complex measuring systems may also have a larger number of states, since not every failure leads to a complete cessation of their functioning. Failure is a random event associated with disruption or cessation of SI performance. This determines the random nature of failure-free operation indicators, the main one of which is the distribution of the failure-free operation time of the SI.

Durability is the property of an SI to maintain its operational state until a limiting state occurs. An operational state is a state of the SI in which all its MX correspond to the normalized values. A limiting state is a state of SI in which its use is unacceptable.

After a metrological failure, the SI characteristics can be returned to acceptable ranges through appropriate adjustments. The adjustment process can be more or less lengthy depending on the nature of the metrological failure, the design of the measuring instrument and a number of other reasons. Therefore, the concept of "maintainability" was introduced into the reliability characteristic. Maintainability is a property of a measuring instrument, which consists in its adaptability to preventing and detecting the causes of failures, and to restoring and maintaining its working condition through maintenance and repair. It is characterized by the expenditure of time and money to restore the measuring instrument after a metrological failure and maintain it in working condition.

As will be shown below, the process of change in MX is continuous, regardless of whether the SI is in use or stored in a warehouse. The property of an SI to preserve the values of its indicators of reliability, durability and maintainability during and after storage and transportation is called its storability.

3.2 Changes in metrological characteristics of measuring instruments during operation

The metrological characteristics of an SI may change during operation. In what follows we will talk about changes in the error Δ(t), implying that any other MX can be considered in a similar way instead.

It should be noted that not all error components are subject to change over time. For example, methodological errors depend only on the measurement technique used. Among instrumental errors, there are many components that are practically not subject to aging, for example, the size of the quantum in digital devices and the quantization error determined by it.

The change in MX of measuring instruments over time is due to aging processes in their nodes and elements caused by interaction with the external environment. These processes occur mainly at the molecular level and do not depend on whether the measuring instrument is in operation or in storage. Consequently, the main factor determining the aging of measuring instruments is the calendar time that has passed since their manufacture, i.e. their age. The rate of aging depends primarily on the materials and technologies used. Research has shown that the irreversible processes that change the error occur very slowly, and in most cases it is impossible to record these changes during an experiment. In this regard, various mathematical methods are of great importance, on the basis of which models of error changes are built and metrological failures are predicted.

The problem solved when determining the metrological reliability of measuring instruments is to find the initial changes in MX and construct a mathematical model that extrapolates the results obtained over a large time interval. Since the change in MX over time is a random process, the main tool for constructing mathematical models is the theory of random processes.

The change in SI error over time is a non-stationary random process. A set of its realizations is shown in Fig. 1 in the form of error modulus curves. At each moment t_i they are characterized by a certain probability density distribution law p(Δ, t_i) (curves 1 and 2 in Fig. 2a). In the center of the band (curve Δ_cp(t)) the highest density of errors is observed, which gradually decreases towards the boundaries of the band, theoretically tending to zero at an infinite distance from the center. The upper and lower boundaries of the SI error band can only be presented in the form of some quantile boundaries, within which most of the realized errors are contained with a confidence probability P. Outside the boundaries, with probability (1 - P)/2, lie the errors most distant from the center of the realizations.

To apply a quantile description of the boundaries of the error band in each of its sections t_i, it is necessary to know the estimates of the mathematical expectation Δ_cp(t_i) and the standard deviation σ(t_i) of the individual realizations. The error value at the boundaries in each section t_i is equal to

Δ_r(t_i) = Δ_cp(t_i) ± k·σ(t_i),

where k is a quantile factor corresponding to a given confidence probability P, whose value significantly depends on the type of the error distribution law across the sections. It is practically impossible to determine the type of this law when studying SI aging processes. This is due to the fact that the distribution laws can undergo significant changes over time.

Metrological failure occurs when the curve Δ(t) intersects the permissible error line ±Δ_pr. Failures can occur at various times in the range from t_min to t_max (see Fig. 2, a), and these points are the intersection points of the 5% and 95% quantiles with the permissible error line. When the 95% quantile curve reaches the permissible limit, 5% of devices experience metrological failure. The distribution of the moments of occurrence of such failures is characterized by the probability density p_H(t), shown in Fig. 2, b. Thus, as a model of the non-stationary random process of change in time of the SI error modulus, it is advisable to use the time dependence of the 95% quantile of this process.
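
A minimal sketch of such a failure prediction, assuming a linear drift model for the mean error and a constant standard deviation (all numbers are hypothetical):

```python
def failure_time(delta0, drift_rate, sigma, k, delta_limit):
    """Predict the time of metrological failure for a linear drift model:
    delta_cp(t) = delta0 + drift_rate * t. Failure is declared when the
    quantile boundary delta_cp(t) + k*sigma reaches the permissible limit."""
    if drift_rate <= 0:
        raise ValueError("non-positive drift never reaches the limit")
    return (delta_limit - delta0 - k * sigma) / drift_rate

# Assumed numbers: initial error 0.2 %, drift 0.05 %/year, sigma 0.1 %,
# k = 1.645 (one-sided 95 % quantile), tolerance 1.0 % of range
t_fail = failure_time(0.2, 0.05, 0.1, 1.645, 1.0)
print("predicted metrological failure after %.1f years" % t_fail)
```

Such an extrapolated time is one natural input for choosing a rational calibration interval.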

Indicators of accuracy, metrological reliability and stability of an SI correspond to various functionals built on the trajectories of change of its MX Δ(t). The accuracy of the SI is characterized by the MX value at the considered moment in time, and for a set of measuring instruments by the distribution of these values, represented by curve 1 for the initial moment and curve 2 for the moment t_i. Metrological reliability is characterized by the distribution of the times at which metrological failures occur (see Fig. 2, b). SI stability is characterized by the distribution of MX increments over a given time.

3.3 Development of models for standardizing metrological characteristics

The MX standardization system is based on the principle of adequacy of the measurement error estimate and its actual value, provided that the estimate actually found is an estimate "from above". The latter condition is explained by the fact that an estimate "from below" is always more dangerous, since it leads to greater damage from unreliable measurement information.

This approach is quite understandable, given that accurate normalization of MX is impossible due to the many influencing factors that are not taken into account (because of ignorance of them and the lack of a tool for identifying them). Therefore, normalization is, to a certain extent, an act of will in reaching a compromise between the desire for a full description of measurement characteristics and the ability to carry this out in real conditions under known experimental and theoretical limitations and the requirements of simplicity and clarity of engineering methods. In other words, overly complex methods of describing and normalizing MX are not viable.

The consumer receives information about standardized MX from the technical documentation for the SI and only in extremely rare, exceptional cases independently carries out an experimental study of individual SI characteristics. Therefore, it is very important to know the relationship between the MX of an SI and instrumental measurement errors. This would allow, knowing one MX complex of an SI, to directly find the measurement error, eliminating one of the most labor-intensive and complex tasks: summing the components of the total measurement error. However, this is hampered by one more circumstance: the difference between the MX of a particular SI and the metrological properties of many SIs of the same type. For example, the systematic error of a given SI is a deterministic quantity, while for a set of SIs it is a random quantity. The NMX complex must be established based on the requirements of the real operating conditions of specific measuring instruments. On this basis, it is advisable to divide all SIs into functional groups. For the first and third groups of SIs, the characteristics of interaction with devices connected to the input and output of the SI, and the non-informative parameters of the output signal, should be normalized. In addition, for the third group the nominal transformation function f_nom(x) must be normalized (in SIs of the second group it is replaced by a scale or other calibrated reading device), as well as the full dynamic characteristics. The indicated characteristics do not make sense for SIs of the second group, with the exception of recording instruments, for which it is advisable to normalize complete or partial dynamic characteristics.

The most common forms of recording the SI accuracy class are:

1) δ = ±[c + d(|x_k/x| − 1)],

where c and d are constant coefficients according to formula (3.6); x_k is the final value of the measuring range; x is the current value;

2) Δ = ±(a + bx),

where b = d; a = c − b;

3) symbolic notation, characteristic of foreign SIs,

δ_op = ± ...
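
As an illustration of form 1, a small Python helper that evaluates the limit of permissible relative error for a hypothetical 0.05/0.02 accuracy class (the coefficients, range limit and test points are assumed):

```python
def permissible_relative_error(c, d, x, x_k):
    """Limit of permissible relative error (in %) for an accuracy class
    written as c/d: delta = +/-[c + d*(|x_k/x| - 1)]."""
    return c + d * (abs(x_k / x) - 1.0)

# Assumed class 0.05/0.02 instrument with range limit x_k = 10 V
for x in (10.0, 5.0, 1.0):
    print("x = %4.1f V  delta = +/-%.3f %%"
          % (x, permissible_relative_error(0.05, 0.02, x, 10.0)))
```

The printout shows the characteristic growth of the relative error towards the beginning of the range, which form 1 is designed to capture.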

GOST 8.009-84 provides two main models (MI and MII) for the formation of NMX complexes, corresponding to two models of the occurrence of SI errors, based on the statistical combination of these errors.

Model II is applicable for SIs whose random error component can be neglected. This model includes the calculation of the largest possible values of the components of the SI error in order to guarantee, with probability P = 1, that the SI error does not go beyond the calculated limits. It is used for the most critical measurements, associated with technical and economic factors, possible catastrophic consequences, threats to human health, etc. When the number of components exceeds three, this model gives a rougher (due to the inclusion of rarely occurring components) but reliable estimate "from above" of the main SI error.

Model I gives a rational estimate of the main SI error with probability P < 1, because rarely realized error components are neglected.

Thus, the NMX complex for error models I and II provides for the statistical integration of individual error components, taking into account their significance.

However, for some SIs such statistical combination is impractical. These are precision laboratory and industrial (used in technological processes) measuring instruments that measure slowly changing processes under conditions close to normal, and exemplary measuring instruments, in whose use repeated observations with averaging are not performed. In such instruments, the main error, or the arithmetic sum of the largest possible values of the individual error components, can be taken as the instrumental error (model III).

Arithmetic summation of the largest values ​​of error components is possible if the number of such components is no more than three. In this case, the assessment of the total instrumental error will practically not differ from the statistical summation.

4. CLASSIFICATION OF SIGNALS

A signal is a material carrier of information that represents a certain physical process, one of the parameters of which is functionally related to the physical quantity being measured. This parameter is called informative.

A measuring signal is a signal containing quantitative information about the physical quantity being measured. Basic concepts, terms and definitions in the field of measuring signals are established by GOST 16465-70 "Radio signals. Terms and definitions". Measuring signals are extremely varied. Their classification according to various criteria is shown in Fig. 3.

Based on the nature of the informative and time parameters, measuring signals are divided into analog, discrete and digital.

An analog signal is a signal described by a continuous or piecewise continuous function Y_a(t), and both this function itself and its argument t can take any values within given intervals: Y_a ∈ (Y_min; Y_max) and t ∈ (t_min; t_max).

A discrete signal is a signal that varies discretely in time or in level. In the first case, at discrete moments of time nT, where T = const is the sampling interval (period) and n = 0, 1, 2, ... is an integer, it can take any values Y_d(nT) ∈ (Y_min; Y_max), called samples. Such signals are described by lattice functions. In the second case, the values of the signal Y_a(t) exist at any time t ∈ (t_min; t_max), but they can take only a limited range of values h_i = n·q, multiples of the quantum q.

Digital signals are level-quantized and time-discrete signals Y_d(nT), which are described by quantized lattice functions (quantized sequences) that at discrete moments of time nT take only a finite series of discrete values: the quantization levels h_1, h_2, ..., h_n.
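
The three signal kinds can be illustrated with a short Python sketch: time discretization produces samples Y(nT), and level quantization maps them onto multiples of the quantum q (the signal, T and q are assumed values):

```python
import math

def sample(f, t_max, T):
    """Time discretization: samples Y(nT) of an analog signal f."""
    n_max = int(t_max / T)
    return [f(n * T) for n in range(n_max + 1)]

def quantize(samples, q):
    """Level quantization: round each sample to the nearest multiple of q."""
    return [q * round(y / q) for y in samples]

# Assumed analog prototype: a 1 Hz sine; T = 0.05 s, quantum q = 0.1
analog = lambda t: math.sin(2 * math.pi * 1.0 * t)
discrete = sample(analog, t_max=0.25, T=0.05)   # discrete in time only
digital = quantize(discrete, q=0.1)             # discrete in time and level
print("discrete:", [round(y, 3) for y in discrete])
print("digital: ", digital)
```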

According to the nature of changes over time, signals are divided into constants, whose values ​​do not change over time, and variables, whose values ​​change over time. Constant signals are the simplest type of measuring signals.

Variable signals can be continuous in time or pulsed. A signal whose parameters change continuously is called continuous. A pulse signal is a signal of finite energy, significantly different from zero during a limited time interval commensurate with the time of completion of the transient process in the system on which this signal is intended to influence.

According to the degree of availability of a priori information, variable measuring signals are divided into deterministic, quasi-deterministic and random. A deterministic signal is a signal whose law of change is known, and whose model does not contain unknown parameters. The instantaneous values of a deterministic signal are known at any time. The signals at the output of measures are deterministic (with a certain degree of accuracy). For example, the output signal of a low-frequency sine wave generator is characterized by the amplitude and frequency values that are set on its controls. The errors in setting these parameters are determined by the metrological characteristics of the generator.

Quasi-deterministic signals are signals with a partially known nature of change over time, i.e. with one or more unknown parameters. They are most interesting from a metrological point of view. The vast majority of measurement signals are quasi-deterministic.

Deterministic and quasi-deterministic signals are divided into elementary, described by simple mathematical formulas, and complex. Elementary signals include constant and harmonic signals, as well as signals described by the unit and delta functions.

Signals can be periodic or non-periodic. Non-periodic signals are divided into almost periodic and transient. Almost periodic is a signal whose values are approximately repeated when a properly selected almost-period is added to the time argument. A periodic signal is a special case of such signals. Almost periodic functions are obtained by adding periodic functions with incommensurable periods, for example Y(t) = sin(ωt) + sin(√2·ωt). Transient signals describe transient processes in physical systems.

A signal is called periodic if its instantaneous values repeat at a constant time interval. The period T of the signal is a parameter equal to the smallest such time interval. The frequency f of a periodic signal is the reciprocal of the period.

A periodic signal is characterized by a spectrum. There are three types of spectrum:

* complex: a complex-valued function of a discrete argument that is a multiple of an integer number of frequency values f of the periodic signal Y(t);

* amplitude: a function of a discrete argument that is the modulus of the complex spectrum of the periodic signal;

* phase: a function of a discrete argument that is the argument of the complex spectrum of the periodic signal.
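
A short numpy sketch illustrating all three spectra for an assumed two-harmonic periodic signal (the record length is chosen as an integer number of periods so that the DFT bins fall exactly on the harmonics):

```python
import numpy as np

fs, n = 1000, 1000                       # sampling rate (Hz), record length
t = np.arange(n) / fs
y = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.cos(2 * np.pi * 30 * t)

spectrum = np.fft.rfft(y) / n            # complex spectrum (discrete argument)
freqs = np.fft.rfftfreq(n, 1 / fs)
amplitude = 2 * np.abs(spectrum)         # amplitude spectrum (modulus)
phase = np.angle(spectrum)               # phase spectrum (argument)

for k in (10, 30):                       # bins at multiples of f = fs/n = 1 Hz
    print("f = %2d Hz  A = %.2f  phi = %+.2f rad"
          % (freqs[k], amplitude[k], phase[k]))
```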

A measuring system, by definition, is designed to perceive, process and store measurement information in the general case of heterogeneous physical quantities through various measuring channels (IC). Therefore, calculating the error of a measuring system comes down to estimating the errors of its individual channels.

The resulting relative error of the MC will be equal to

δ(x) = ±[δ_e + δ_b(x_P/x − 1)],

where x is the current value of the measured quantity;

x_P is the limit of the given channel measurement range, at which the relative error is minimal;

δ_b, δ_e are the relative errors calculated at the beginning and end of the range, respectively.

A measuring channel is a chain of various perceiving, converting and recording links.
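
If the per-link errors are known and can be treated as independent, the channel estimate reduces to a statistical combination, as in this sketch (the link errors are assumed values):

```python
import math

def channel_relative_error(link_errors):
    """Resulting relative error of a measuring channel built as a chain
    of links, assuming independent (non-correlated) link errors that are
    combined statistically (root-sum-of-squares)."""
    return math.sqrt(sum(e ** 2 for e in link_errors))

# Hypothetical chain: sensor, normalizing amplifier, ADC (errors in %)
links = [0.15, 0.05, 0.10]
print("channel error: +/-%.2f %%" % channel_relative_error(links))
```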

5. Channel development

5.1 Development of a channel model

In real data transmission channels, the signal is affected by complex interference and it is almost impossible to give a mathematical description of the received signal. Therefore, when studying signal transmission through channels, idealized models of these channels are used. A data transmission channel model is understood as a description of a channel that allows one to calculate or evaluate its characteristics, on the basis of which one can explore various ways of constructing a communication system without direct experimental data.

The model of a continuous channel is the so-called Gaussian channel. The noise in it is additive and represents an ergodic normal process with zero mathematical expectation. The Gaussian channel reflects quite well only the channel with fluctuation noise. For multiplicative interference, a channel model with a Rayleigh distribution is used. For impulse noise, a channel with a hyperbolic distribution is used.

The discrete channel model coincides with the models of error sources.

A number of mathematical models of error distribution in real communication channels have been proposed, such as those of Gilbert, Mertz, Mandelbrot and others.
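
A minimal simulation sketch of the two continuous-channel models mentioned above, assuming additive zero-mean Gaussian noise and, for the multiplicative case, a Rayleigh-distributed fading factor (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_channel(signal, noise_sigma):
    """Gaussian channel: additive, zero-mean normal noise."""
    return signal + rng.normal(0.0, noise_sigma, signal.shape)

def rayleigh_channel(signal, scale, noise_sigma):
    """Channel with multiplicative interference: Rayleigh-distributed
    fading factor plus additive Gaussian noise."""
    fading = rng.rayleigh(scale, signal.shape)
    return fading * signal + rng.normal(0.0, noise_sigma, signal.shape)

t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)
print("Gaussian channel noise std:", np.std(gaussian_channel(s, 0.1) - s))
print("Rayleigh channel output std:", np.std(rayleigh_channel(s, 1.0, 0.1)))
```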

5.2 Development of a measuring channel model

Previously, measuring equipment was designed and manufactured mainly in the form of separate instruments designed to measure one or several physical quantities. Currently, scientific experiments, the automation of complex production processes, control and diagnostics are unthinkable without the use of measuring information systems (IIS) of various purposes, which make it possible to automatically obtain the necessary information directly from the object being studied, process it and issue it in the required form. Specialized measuring systems are being developed for almost all areas of science and technology.

When designing an IIS according to given technical and operational characteristics, a task arises related to the choice of a rational structure and a set of technical means for its construction. The structure of the IIS is mainly determined by the measurement method on which it is based, and the number and type of technical means by the information process occurring in the system. An assessment of the nature of the information process and the types of information transformation can be made on the basis of an analysis of the IIS information model, but its construction is a rather labor-intensive process, and the model itself is so complex that it makes it difficult to solve the problem.

Due to the fact that in third-generation IIS information processing is carried out mainly by universal computers, which are a structural component of the IIS and are selected during design from a limited number of serial computers, the information model of the IIS can be simplified by reducing it to a model of a measuring channel (MC). All measuring channels of the IIS, which include elements of information processes, from receiving information from the object of study or control to its display or processing and storage, contain a certain limited number of types of information transformation. By combining all types of information conversion in one measuring channel and isolating the latter from the IIS, and also keeping in mind that analog signals always act at the input of the measuring system, we obtain two models of measuring channels: with direct (Fig. 4a) and with reverse (Fig. 4b) transformation of measurement information.

On the models, in nodes 0 - 4, information is converted. The arrows indicate the direction of information flows, and their letter designations indicate the type of transformation.

Node 0 is the output of the research or control object, at which analog information A is generated, characterizing the state of the object. Information A arrives at node 1, where it is converted to the form A_n for further transformations in the system. In node 1, conversion of a non-electrical information carrier into an electrical one, amplification, scaling, linearization, etc. can be carried out, i.e. normalization of the parameters of the information carrier A.

In node 2, the normalized information carrier A_n is modulated for transmission over the communication line and provided in the form of an analog A_n or discrete D_m signal.

Analog information A_n in node 3 is demodulated and sent to node 4, where it is measured and displayed.

Fig. 4. Model of the measuring channel with direct (a) and reverse (b) transformation of measurement information

Discrete information in node 3_2 is either converted into analog information A_n and enters node 4_1, or, after digital conversion, is sent to a digital information display device or to a device for processing it.

In some MCs, the normalized information carrier A_n from node 1 immediately goes to node 4_1 for measurement and display. In other MCs, analog information A, without a normalization operation, immediately enters node 2, where it is sampled.

Thus, the information model (Fig. 4a) has six branches through which information flows are transmitted: analog 0-1-2-3_1-4_1 and 0-1-4_1, and analog-discrete 0-1-2-3_2-4_1, 0-1-2-3_2-4_2 and 0-2-3_2-4_1, 0-2-3_2-4_2. Branch 0-1-4_1 is not used when constructing measuring channels of the IIS, but only in autonomous measuring instruments, and therefore is not shown in Fig. 4a.

The model shown in Fig. 4b differs from the model in Fig. 4a only in the presence of branches 3_2-1'-0, 3_1-1'-0, 3_2-1'-1 and 3_1-1'-1, through which reverse transmission of the analog information carrier A_n is carried out. In node 1', the output discrete information carrier is converted into a signal homogeneous with the input information carrier A or with the normalized information carrier A_n. Compensation can be made both according to A and according to A_n.

Analysis of the information models of the measuring channels of the IIS showed that when constructing them based on the direct conversion method, only five variants of structures are possible, and when using measurement methods with inverse (compensatory) information conversion, twenty.

In most cases (especially when constructing an IIS for remote objects), the generalized information model of the IIS measuring channel has the form shown in Fig. 4a. The analog-discrete branches 0-1-2-3_2-4_2 and 0-2-3_2-4_2 are the most widespread. As can be seen, for the indicated branches the number of levels of information conversion in the MC does not exceed three.

Since the nodes contain technical means that transform information, taking into account the limited number of transformation levels, they can be combined into three groups. This will allow, when developing an IC IIS, to select the necessary technical means to implement a particular structure. The group of technical means of node 1 includes the entire set of primary measuring transducers, as well as unifying (normalizing) measuring transducers (UMTs) that perform scaling, linearization, power conversion, etc.; blocks of test formation and exemplary measures.

In node 2, if there are analog-discrete branches, there is another group of measuring instruments: analog-to-digital converters (ADC) and switches, which serve to connect the corresponding source of information to the MC or processing device, as well as communication channels (CC).

The third group (node 3) combines code converters (PC), digital-to-analog converters (DAC) and delay lines (DL).

The given MC structure, which implements the direct measurement method, is shown without the connections controlling the operation of the switching element and the ADC. It is standard, and most multi-channel IIS, especially long-range ones, are built on its basis.

Of interest are the methods for calculating MCs for the various information models discussed above. A strict mathematical calculation is impossible, but by using simplified approaches to determining the components of the resulting error, their parameters and distribution laws, specifying the value of the confidence probability and taking into account the correlations between the components, it is possible to create and calculate a simplified mathematical model of a real measuring channel. Examples of calculating the error of channels with analog and digital recorders are considered in the works of P. V. Novitsky.

LITERATURE

1. Pestrikov V. M. Home Electrician and More... 4th ed. Nit.

2. Sergeev A. G., Krokhin V. V. Metrology: textbook. Moscow: Logos, 2000.

3. Goryacheva G. A., Dobromyslov E. R. Capacitors: Handbook. Moscow: Radio i Svyaz, 1984.

4. Rannev G. G. Methods and Measuring Instruments. Moscow: Publishing Center "Academia", 2003.

5. http://www.biolock.ru

6. Kalashnikov V. I., Nefedov S. V., Putilin A. B. Information-Measuring Equipment and Technologies: textbook for universities. Moscow: Vysshaya Shkola, 2002.

Significant difficulties arise in reducing the random error when measuring a time-varying quantity. In this case, to obtain the best estimate of the measured value, a filtering procedure is used. Depending on the type of transformations used, linear and nonlinear filtering are distinguished; the individual procedures can be implemented both in hardware and in software.

Filtering can be used not only to suppress interference induced on the input circuits of analog signal transmission, but also, if necessary, to limit the spectrum of the input signal and to restore the spectrum of the output signal (this has already been discussed earlier). If necessary, filters with a tunable cutoff frequency can be used.

The use of automatic correction of systematic errors can be considered as adaptation of the channel to its own state. The modern element base makes it possible today to implement input circuits that adapt to the characteristics of the input signal, in particular to its dynamic range. Such adaptation requires an input amplifier with controlled gain. If, based on the results of previous measurements, it is established that the dynamic range of the signal is small compared to the range of the ADC input, the amplifier gain is increased until the dynamic range of the signal corresponds to the operating range of the ADC. In this way the digitization error can be minimized and, consequently, the accuracy of measurements increased. The change in the signal gain at the input is taken into account in software when the measurement results are processed by a digital controller.
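
A sketch of such gain adaptation, assuming a hypothetical programmable-gain amplifier with a fixed set of gains and a 12-bit ADC (all values are illustrative):

```python
def select_gain(signal_peak, adc_full_scale, gains=(1, 2, 4, 8, 16, 32, 64)):
    """Pick the largest programmable gain that keeps the amplified signal
    within the ADC range; the chosen gain is later undone in software."""
    usable = [g for g in gains if signal_peak * g <= adc_full_scale]
    return max(usable) if usable else min(gains)

# Assumed: ADC full scale 2.5 V, measured signal peak 0.07 V
g = select_gain(0.07, 2.5)
print("gain = %d, scaled peak = %.2f V" % (g, 0.07 * g))
# The 12-bit quantum referred to the input shrinks by the same factor:
q_in = 2.5 / 2 ** 12 / g
print("input-referred quantum: %.1f uV" % (q_in * 1e6))
```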

The criteria for assessing the correspondence between the dynamic range of the signal and the operating range of the ADC will be discussed further; methods for adapting the input channel to the frequency properties of the input signal will also be considered.

2.4. Sample-and-hold devices

When collecting information and subsequently converting it, it is often necessary to fix the value of an analog signal for a certain period of time. For this purpose, sample-and-hold devices (in the Russian literature, UVH) are used. Another name for such devices is analog storage devices. They operate in two modes. In the sampling (tracking) mode, they must repeat the input analog signal at their output, and in the storage mode they must store and output the last input voltage preceding the moment the device switched to this mode.

In the simplest case, when constructing a UVH, these operations require only a storage capacitor C_st and a switch S (Fig. 2.12, a). When the switch is closed, the voltage on the capacitor and at the output of the UVH repeats the input voltage. When the switch is opened, the voltage on the capacitor, equal to the input voltage at the moment the switch opened, is stored and transmitted to the output of the UVH.


Fig. 2.12. Functional diagram of the UVH (a) and time diagrams of its operation (b)

Obviously, in a practical implementation the voltage level on the capacitor in storage mode will not remain constant (Fig. 2.12, b), because the capacitor is discharged by the load current and by its own leakage currents. So that the capacitor voltage at the output of the UVH remains at an acceptable level for as long as possible, a follower on an op-amp is installed (DA1 in Fig. 2.12, a). A follower has a high input impedance, which "decouples" the capacitor circuit from the load circuit and significantly reduces the discharge of the capacitor through the load. To reduce the capacitor's own leakage currents, a capacitor with a high-quality dielectric must be chosen. And, of course, for the voltage on the capacitor to remain constant as long as possible, its capacitance should be as large as possible.

When the S/H device is switched from hold mode back to tracking mode, the capacitor voltage does not immediately reach the current input level (Fig. 2.12b). The time this takes is determined by the capacitor charging time and is called the acquisition (sampling) time. The larger the charging current, the faster the capacitor charges. So that this current is not limited by the output resistance of the preceding stage, a second op-amp follower (DA2 in Fig. 2.12a) is placed at the S/H input; here its low output impedance is exploited. The capacitor also charges faster the smaller its capacitance. The requirements on the capacitance in the two operating modes are therefore contradictory, and its value must be chosen in each case from the specific requirements on the durations of the modes.

The input follower drives a capacitive load, so it must be built on an operational amplifier that remains stable at unity gain into a large capacitive load.

When an S/H device is used with an ADC, the hold time is, as a rule, not much longer than the ADC conversion time. The capacitor value is then chosen to obtain the best acquisition time, subject to the condition that the voltage droop during one conversion does not exceed the value of the least significant bit of the ADC.
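A minimal numeric sketch of this condition, with assumed example figures: since the droop rate is dV/dt = I/C, the capacitance must satisfy C >= I_leak * t_conv / U_LSB.

```python
# Sketch: smallest hold capacitance for which the droop during one ADC
# conversion stays below 1 LSB. All numbers are assumed examples.

def min_hold_capacitance(i_leak, t_conv, v_full_scale, n_bits):
    """From dV/dt = I/C: C >= I_leak * t_conv / V_lsb."""
    v_lsb = v_full_scale / (2 ** n_bits - 1)
    return i_leak * t_conv / v_lsb

# 100 pA leakage, 10 us conversion, 10 V range, 12-bit ADC -> ~0.4 pF,
# so here the droop-rate specification, not the droop per conversion,
# usually dictates the capacitor choice.
print(min_hold_capacitance(100e-12, 10e-6, 10.0, 12))
```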

Since dielectric losses in the storage capacitor are one of the error sources, capacitors with polypropylene, polystyrene or Teflon dielectrics are the best choice. Mica and polycarbonate capacitors have noticeably poorer characteristics, and ceramic capacitors should not be used at all.

The accuracy characteristics of an S/H device include the zero offset voltage, which usually does not exceed 5 mV (for an op-amp with bipolar transistors at the input; op-amps with field-effect input transistors have a more significant offset), and the droop of the held voltage for a given storage capacitance (for different devices, values from 10^-3 to 10^-1 V/s are specified at C_hold = 1000 pF). The droop can be reduced by increasing C_hold, but this degrades the dynamic characteristics of the circuit.

The dynamic characteristics of an S/H device include the acquisition time, which shows how long, under the most unfavorable conditions, the charging of the storage capacitor to within a given error band lasts, and the aperture delay, the interval between the removal of the control voltage and the actual turn-off of the switch.

There are many sample-and-hold integrated circuits with good performance. A number of them include an internal storage capacitor and guarantee maximum acquisition times of tens to hundreds of nanoseconds with an accuracy of 0.01% for a 10 V signal. The aperture delay of popular devices does not exceed 100 ns. If higher performance is required, hybrid and modular S/H devices can be used.

As an example of a practical S/H device, Fig. 2.13 shows the functional diagram of the K1100SK2 LSI (LF398). The circuit has overall negative feedback covering the entire device, from the output of the follower on op-amp DA2 to the input of the follower on amplifier DA1.

Dating" href="/text/category/datirovaniye/" rel="bookmark">dating the ADC reading when measuring a variable signal, in multi-channel measuring systems for simultaneously taking data from various sensors, eliminating high-frequency emissions in the DAC output signal when changing the code. These and other applications of UVC will be discussed in more detail in further material.

3. DIGITAL TO ANALOG CONVERTERS

3.1 General implementation methods

Digital-to-analog converters (DACs) are devices that convert a digital code into an analog signal proportional in magnitude to the code value.

DACs are widely used to interface digital control systems with actuators and mechanisms controlled by the level of an analog signal, and as components of more complex analog-to-digital devices and converters.

In practice, DACs are mainly used for converting binary codes, so the following discussion covers only such DACs.

Any DAC is characterized, first of all, by its conversion function, which relates a change in the input value (digital code) to a change in the output value (voltage or current), Fig. 3.1.

Fig. 3.1. Conversion function (transfer characteristic) of a DAC

Analytically, the DAC conversion function can be expressed as follows (for the case when the output signal is a voltage):

U_OUT = (U_MAX / N_MAX) · N_IN,

where U_OUT is the output voltage corresponding to the digital code N_IN applied to the DAC inputs, and U_MAX is the maximum output voltage, corresponding to the maximum input code N_MAX.

The quantity K_DAC, defined by the ratio U_MAX / N_MAX, is called the digital-to-analog conversion coefficient. Its constancy over the entire range of the arguments makes the change of the output analog signal proportional to the corresponding change of the input code. That is why, despite the stepwise character of the characteristic caused by the discrete change of the input value (digital code), DACs are considered linear converters.

If the value N_IN is represented through the weights of its bits, the DAC conversion function can be written as

U_OUT = Σ a_i · U_i (i = 1 … n),

where i is the bit number of the input code N_IN; a_i is the value of the i-th bit (zero or one); U_i is the weight of the i-th bit; n is the number of bits of the input code (the DAC bit depth).

This way of writing the conversion function largely reflects the operating principle of most DACs, which essentially consists of summing contributions to the analog output value (summing analog measures), each proportional to the weight of the corresponding bit.
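The bit-weight summation can be written out directly in code; a short sketch (the 4-bit, 15 V figures are arbitrary examples):

```python
# Sketch: U_out as a sum of bit weights, with the weight doubling
# from each bit to the next.

def dac_output(code, n_bits, u_max):
    """Sum a_i * U_i over all bits of the input code (LSB is bit 0 here)."""
    u_lsb = u_max / (2 ** n_bits - 1)   # weight of the least significant bit
    return sum(((code >> i) & 1) * u_lsb * 2 ** i for i in range(n_bits))

# 4-bit DAC with U_MAX = 15 V: code 1011 (decimal 11) gives 11 V.
print(dac_output(0b1011, n_bits=4, u_max=15.0))
```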

In general, by construction method, DACs are divided into those with weighted summation of currents, those with weighted summation of voltages, and those based on a code-controlled voltage divider.

In a DAC based on weighted summation of currents, the signals of current generators are summed in accordance with the values of the bits of the input code N_IN, and the output signal is a current. The construction of a four-bit DAC of this kind is illustrated in Fig. 3.2. The generator currents are chosen proportional to the weights of the bits of the binary code: if the smallest generator current, corresponding to the least significant bit, equals I, then each following generator current must be twice the previous one, i.e. 2I, 4I, 8I. Each i-th bit of the input code N_IN controls the corresponding switch S_i. If the i-th bit equals one, the switch is closed, and the current of the generator whose value is proportional to the weight of this bit takes part in forming the converter output current. The output current is thus proportional to the value of the input code N_IN.

Fig. 3.2. Construction of a DAC based on weighted summation of currents

For example, if the input code value N_IN equals eleven, i.e. is represented in binary form as 1011, then switches S1, S2 and S4 in the circuit of Fig. 3.2 are closed and switch S3 is open. Currents equal to I, 2I and 8I are therefore summed, forming the output current I_OUT = 11I; that is, the output current is proportional to the input code value N_IN = 11.

In a DAC based on weighted summation of voltages, the output signal is formed from the values of voltage generators in accordance with the bits of the input code N_IN and is represented by a voltage. The construction of a four-bit DAC of this kind is illustrated in Fig. 3.3. The generator voltages are set according to the binary law, proportional to the bit weights of the binary code (E, 2E, 4E and 8E). If the i-th bit of the input code N_IN equals one, the corresponding switch is open, and the voltage generator whose value is proportional to the weight of this bit takes part in forming the output voltage U_OUT of the converter. The output voltage U_OUT of the DAC is thus proportional to the input code value N_IN.

Fig. 3.3. Construction of a DAC based on weighted summation of voltages

For example, if the input code value N_IN equals eleven (binary 1011), then switches S1, S2 and S4 in the circuit of Fig. 3.3 are open and switch S3 is closed. Voltages equal to E, 2E and 8E are then summed, forming the output voltage U_OUT = 11E; that is, the output voltage is proportional to the input code value N_IN = 11.

In the third method, the DAC is implemented as a code-controlled voltage divider (Fig. 3.4).

Fig. 3.4. Construction of a DAC based on a code-controlled voltage divider

The code-controlled divider consists of two arms. If the DAC bit depth is n, the number of resistors in each arm is 2^n. The resistance of each arm is changed by means of the switches S. The switches are controlled by the unitary output code of the decoder DC: the switches of one arm directly, those of the other arm through inverters. The decoder output code contains a number of ones equal to the value of the input code N_IN, so it is easy to see that the division ratio of the divider is always proportional to the input code value N_IN.

The latter two methods have not found widespread use because of practical difficulties in their implementation. For a DAC structure with weighted summation of voltages it is impossible to implement voltage generators that tolerate a short circuit at the output, or switches without residual voltage in the closed state. In a DAC based on a code-controlled divider, each of the two divider arms consists of a very large number of resistors (2^n), requires the same number of switches to control them, and needs a large decoder, so the implementation becomes very cumbersome. The main structure used in practice is therefore the DAC with weighted summation of currents.

3.2 DAC with weighted current summation

Let us consider the construction of a simple DAC with weighted summation of currents. In the simplest case it consists of a resistive matrix and a set of switches (Fig. 3.5).

Fig. 3.5. DAC implementation based on a resistive matrix

The number of switches and the number of resistors in the matrix equal the number of bits n of the input code N_IN. The resistor values are chosen proportional to the weights of the bits of the binary code, i.e. proportional to the series 2^i, i = 1 … n. When a voltage source is connected to the common node of the matrix and the switches are closed, a current flows through each resistor. Thanks to this choice of resistor values, the currents are distributed according to the binary law, i.e. proportionally to the bit weights of the binary code. When an input code N_IN is applied, the switches are turned on according to the values of the corresponding code bits: a switch is closed if the corresponding bit equals one. In the current node, the currents proportional to the weights of these bits are then summed, and the total current flowing out of the node is proportional to the input code value N_IN.

In the structure shown in Fig. 3.6 there are two output nodes. Depending on the values of the input code bits, each switch connects its resistor either to the node connected to the device output or to the second node, which is usually grounded. Current then flows through every resistor of the matrix at all times, regardless of switch position, so the current drawn from the reference voltage source is constant.

Fig. 3.6. DAC implementation based on a resistive matrix with change-over switches

A common disadvantage of both structures considered is the large ratio between the smallest and largest resistor values in the matrix. Despite the large spread of values, the same absolute trimming error must be ensured for both the largest and the smallest resistors; that is, the relative trimming accuracy of the large resistors must be very high. In an integrated DAC with more than ten bits this is quite difficult to achieve.

Structures based on R-2R resistive matrices (Fig. 3.7) are free from these disadvantages.

Fig. 3.7. DAC implementation based on an R-2R resistive matrix with change-over switches

It is easy to verify that with this construction of the resistive matrix the current in each successive parallel branch is half that in the previous one, i.e. the currents are distributed according to the binary law. Since the matrix contains only two resistor values, differing by a factor of two, their values can be trimmed quite simply, without high demands on the relative accuracy of trimming.
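This halving is easy to check numerically. The sketch below solves the node voltages of an idealized ladder (reference source driving the first node, series R between nodes, a 2R shunt at every node, a 2R termination at the far end; all values assumed) and prints the shunt-branch currents relative to the first:

```python
import numpy as np

R, VREF, N = 1e3, 10.0, 6          # arbitrary example values; only ratios matter

# Unknown node voltages v[1..N-1]; node 0 is held at VREF by the source.
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
g, g2 = 1 / R, 1 / (2 * R)

for k in range(1, N):
    i = k - 1                      # matrix row for node k
    A[i, i] += g2                  # 2R shunt branch to ground
    if k == N - 1:
        A[i, i] += g2              # terminating 2R at the far end
    else:
        A[i, i] += g               # series R to node k+1
        A[i, i + 1] -= g
    A[i, i] += g                   # series R to node k-1
    if k == 1:
        b[i] += g * VREF           # injection from the reference source
    else:
        A[i, i - 1] -= g

v = np.concatenate(([VREF], np.linalg.solve(A, b)))
shunt_currents = v / (2 * R)
print(shunt_currents / shunt_currents[0])   # -> 1, 1/2, 1/4, 1/8, ...
```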

3.3 DAC parameters and errors

The system of electrical characteristics of DACs, reflecting the features of their construction and operation, includes more than a dozen parameters. The main ones, recommended for inclusion in normative and technical documentation as the most common and most fully describing converter operation in static and dynamic modes, are listed below.

1. Number of bits – number of bits of the input code.

2. Conversion coefficient – the ratio of the output signal increment to the input signal increment for a linear conversion function.

3. Output voltage (current) settling time – the time interval from a specified change of the code at the DAC input to the moment at which the output voltage or current finally enters a band whose width equals the weight of the least significant bit (LSB), located symmetrically about the steady-state value. Fig. 3.8 shows the transition function of a DAC, i.e. the change of the DAC output signal in time after a code change. Besides the settling time, it characterizes some other dynamic parameters of the DAC: the overshoot of the output signal, the damping factor, the natural frequency of the settling process, and so on. When characterizing a specific DAC, this characteristic is recorded for a code change from zero to half the maximum code value (a sketch of extracting this parameter from a recorded step response is given after this list).

4. Maximum conversion frequency – the highest sampling rate at which the specified parameters still meet the established standards.
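As referenced in item 3, here is a minimal sketch of extracting the settling time from a sampled step response, directly from the definition (synthetic exponential data assumed):

```python
# Sketch: settling time = last instant the output is outside a band of
# +/-0.5 LSB around the steady-state value.

import numpy as np

def settling_time(t, u, u_final, v_lsb):
    outside = np.abs(u - u_final) > v_lsb / 2
    if not outside.any():
        return t[0]                       # already settled at the start
    last = np.flatnonzero(outside)[-1]
    return t[min(last + 1, len(t) - 1)]   # first sample back inside the band

# Synthetic response: 10 V step, tau = 1 us, 12-bit LSB band.
t = np.linspace(0.0, 20e-6, 2001)
u = 10.0 * (1.0 - np.exp(-t / 1e-6))
print(settling_time(t, u, 10.0, 10.0 / 4095))   # ~9.0e-6 s = tau*ln(2*4095)
```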

There are other parameters characterizing DAC performance and features of operation, among them: low- and high-level input voltages, output leakage current, supply current, output voltage or current range, power-supply instability influence factor, and others.

The most important parameters of a DAC are those defining its accuracy, which are specified through errors normalized in magnitude.

Fig. 3.8. Determining the settling time of the DAC output signal

First of all, static and dynamic DAC errors must be clearly distinguished. Static errors are those remaining after completion of all transient processes caused by a change of the input code. Dynamic errors are determined by the transients at the output of the DAC or its components arising as a result of an input code change.

The main types of static DAC errors are defined as follows.

Absolute conversion error at the end point of the scale – the deviation of the output voltage (current) from the nominal value corresponding to the end point of the conversion-function scale. For DACs operating with an external reference voltage source, it is determined without taking into account the error introduced by that source. It is measured in units of the least significant bit of the conversion.

Zero offset voltage at the output – the voltage at the DAC output with a zero input code. It is measured in LSB units, determines a parallel shift of the real conversion function, and does not introduce nonlinearity; it is an additive error.

Conversion coefficient (scale) error – a multiplicative error caused by deviation of the slope of the conversion function from the required slope.

DAC nonlinearity – the deviation of the real conversion function from the specified straight line. The main requirement imposed on a DAC in this respect is strict monotonicity of the characteristic, which guarantees an unambiguous correspondence between the output and input signals of the converter. Formally, the monotonicity requirement is that the sign of the derivative of the characteristic stays constant over the entire working range.

Nonlinearity errors are generally divided into two types - integral and differential.

Integral nonlinearity error – the maximum deviation of the real characteristic from the ideal one; in effect, the averaged conversion function is considered. This error is specified as a percentage of the final range of the output value. Integral nonlinearity arises from various nonlinear effects affecting the operation of the converter as a whole. They show up most clearly in integrated converter designs; for example, integral nonlinearity may be caused by different heating of some nonlinear resistances in the LSI at different input codes.

Differential nonlinearity error – the deviation of the real characteristic from the ideal one for adjacent code values. These errors reflect non-monotonic deviations of the real characteristic from the ideal. To characterize the conversion function as a whole, the local differential nonlinearity with the maximum absolute value is taken. Permissible limits of differential nonlinearity are expressed in units of the LSB weight.

Let us consider the causes of differential errors and how they affect the DAC conversion function. Suppose that all bit weights in the DAC are set perfectly accurately except the weight of the most significant bit.

If we consider the sequence of all code combinations of a binary code of a given bit depth, the rules of binary code formation imply that in the combinations corresponding to values from zero to half the full scale (from zero to half the maximum code value) the most significant bit always equals zero, while in the combinations corresponding to values from half scale to full scale it always equals one. Therefore, for codes in the first half of the input code scale, the weight of the most significant bit does not participate in forming the output signal, while for codes in the second half it participates constantly. If this bit weight is set with an error, the error is reflected in the output signal, and hence in the DAC conversion function, as shown in Fig. 3.9a.

Fig. 3.9. Influence of an error in setting the weight of the most significant bit on the DAC conversion function

Fig. 3.9a shows that for the first half of the input code values the real DAC conversion function coincides with the ideal one, while for the second half it differs from the ideal by the error in setting the MSB weight. The influence of this error on the conversion function can be minimized by choosing a conversion scale factor that reduces the error at the end point of the scale to zero (Fig. 3.9b). The differential errors are then distributed symmetrically about the middle of the scale, which gave them their other name, symmetric-type errors. It is also clear that such an error makes the DAC conversion function non-monotonic.
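A short sketch makes the mid-scale effect concrete: with ideal weights 1, 2, 4, 8 LSB and the MSB weight deliberately set low (a 1.5 LSB error is assumed purely for illustration), the step from code 0111 to 1000 becomes negative, i.e. the characteristic is non-monotonic:

```python
# Sketch: effect of an MSB weight error on a 4-bit transfer characteristic.

N = 4
weights = [1.0, 2.0, 4.0, 8.0]      # ideal bit weights in LSB units
weights[-1] -= 1.5                  # assumed error in the MSB weight

def u_out(code):
    return sum(((code >> i) & 1) * w for i, w in enumerate(weights))

outputs = [u_out(code) for code in range(2 ** N)]
steps = [b - a for a, b in zip(outputs, outputs[1:])]   # ideal step = 1 LSB
print(min(steps), steps.index(min(steps)))  # -0.5 LSB at the 0111 -> 1000 step
```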

Fig. 3.10a shows how the real DAC conversion function differs from the ideal one when there are no errors in the weights of any bits except the bit preceding the most significant one. Fig. 3.10b shows the behavior of the conversion function when the scale component of the total error is removed (reduced to zero).

Metrology" href="/text/category/metrologiya/" rel="bookmark">it is rational to achieve metrological indicators in a comprehensive manner, using technological techniques with various structural methods. And when using ready-made integrated converters, structural methods are the only way to further improve the metrological characteristics of the conversion system .

Zero offset error and scale error are easily corrected at the DAC output. To do this, a constant offset compensating the offset of the converter characteristic is introduced into the output signal. The required conversion scale is established either by adjusting the gain of the amplifier at the converter output or, if the DAC is a multiplying one, by adjusting the reference voltage value.

As the input digital signal D(t) is successively increased from 0 to 2^N − 1 in steps of one least significant bit (LSB), the output signal U_OUT(t) forms a stepped curve. This dependence is usually called the DAC conversion characteristic. In the absence of hardware errors, the midpoints of the steps lie on the ideal straight line 1 (Fig. 22), which corresponds to the ideal conversion characteristic. The real conversion characteristic may differ substantially from the ideal one in the size and shape of the steps and in their position on the coordinate plane. A number of parameters serve to quantify these differences.

Static parameters

Resolution – the increment of U_OUT when converting adjacent values D_j, i.e. values differing by one LSB. This increment is the quantization step. For binary conversion codes, the nominal value of the quantization step is h = U_FS/(2^N − 1), where U_FS is the nominal maximum output voltage of the DAC (full-scale voltage) and N is the DAC bit depth. The higher the bit depth of the converter, the higher its resolution.

Full-scale error – the relative difference between the real and ideal values of the conversion scale limit in the absence of zero offset. It is the multiplicative component of the total error; sometimes it is specified as a number of LSBs.

Zero offset error – the value of U_OUT when the DAC input code is zero. It is the additive component of the total error and is typically specified in millivolts or as a percentage of full scale.

Nonlinearity – the maximum deviation of the real conversion characteristic U_OUT(D) from the optimal one (line 2 in Fig. 22). The optimal characteristic is found empirically so as to minimize the nonlinearity error. Nonlinearity is usually specified in relative units, but in reference data it is also given in LSBs.

Differential nonlinearity – the maximum change (with sign) of the deviation of the real conversion characteristic U_OUT(D) from the optimal one when moving from one input code value to an adjacent one. It is usually specified in relative units or in LSBs.

Monotonicity of the conversion characteristic – an increase (decrease) of the DAC output voltage U_OUT with an increase (decrease) of the input code D. If the differential nonlinearity exceeds the relative quantization step h/U_FS, the converter characteristic is non-monotonic.

The temperature instability of a DAC is characterized by the temperature coefficients of the full-scale error and of the zero offset error.

Full scale and zero offset errors can be corrected by calibration (tuning). Nonlinearity errors cannot be eliminated by simple means.

Dynamic parameters

The dynamic parameters of a DAC are determined from the change of the output signal upon an abrupt change of the input code, usually from "all zeros" to "all ones" (Fig. 23).

Settling time – the time interval from the moment the input code changes (t = 0 in Fig. 23) until the last moment at which

|U_OUT − U_FS| = δ/2

is satisfied, where δ/2 usually corresponds to one LSB.

Slew rate – the maximum rate of change of U_OUT(t) during the transient. It is defined as the ratio of the increment ΔU_OUT to the time Δt over which the increment occurred. It is usually specified in the data sheets of DACs with a voltage output; for a DAC with a current output this parameter largely depends on the type of the output op-amp.

For multiplying DACs with voltage output, the unity-gain frequency and the power bandwidth are often specified as well; they are determined mainly by the properties of the output amplifier.

DAC noise

Noise at the DAC output can appear for various reasons rooted in the physical processes occurring in semiconductor devices. To assess the quality of high-resolution DACs it is customary to use the concept of root-mean-square noise, usually specified in nV/√Hz over a given frequency band.

Glitches (impulse noise) are sharp short spikes or dips of the output voltage arising during a change of the output code because the analog switches in different bits of the DAC do not open and close simultaneously. For example, if on the transition from code 011…111 to code 100…000 the switch of the most significant bit of a current-summing DAC opens later than the lower-bit switches close, a signal corresponding to code 000…000 will exist at the DAC output for some time.
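The mechanism can be mimicked with a toy timing model; assume, purely for illustration, that the MSB switch turns on `skew` seconds later than the lower-bit switches turn off on the mid-scale transition:

```python
# Sketch: mid-scale glitch from switch skew (idealized, values in LSB).

def dac_output_near_transition(t, skew, n_bits=12):
    """Output around the 0111...1 -> 1000...0 code change at t = 0."""
    half = 2 ** (n_bits - 1)
    if t < 0:
        return half - 1      # old code still applied
    if t < skew:
        return 0             # lower bits already off, MSB not yet on
    return half              # new code fully applied

for t in (-1e-9, 0.0, 2e-9, 6e-9):
    print(t, dac_output_near_transition(t, skew=5e-9))
# The output momentarily corresponds to code 000...0 -- the glitch.
```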

Glitches are typical of high-speed DACs, in which the capacitances that could smooth them out are minimized. A radical way to suppress glitches is to use sample-and-hold devices. Glitches are rated by their area (in pV·s).

Table 2 lists the most important characteristics of some types of digital-to-analog converters.

Table 2

| DAC | Bits | Channels | Output | Settling time, µs | Interface | Internal reference | Supply voltage, V | Power, mW | Notes |
|---|---|---|---|---|---|---|---|---|---|
| General-purpose DACs | | | | | | | | | |
| 572PA1 | 10 | 1 | I | 5 | – | No | 5; 15 | 30 | MOS switches, multiplying |
| – | 10 | 1 | U | 25 | Serial | Yes | 5 or ±5 | 2 | |
| 594PA1 | 12 | 1 | I | 3.5 | – | No | +5, −15 | 600 | Current switches |
| MAX527 | 12 | 4 | U | 3 | Parallel | No | ±5 | 110 | Input words loaded over an 8-bit bus |
| DAC8512 | 12 | 1 | U | 16 | Serial | Yes | 5 | 5 | |
| – | 14 | 8 | U | 20 | Parallel | No | 5; ±15 | 420 | MOS switches, inverted resistive matrix |
| – | 8 | 16 | U | 2 | Parallel | No | 5 or ±5 | 120 | MOS switches, inverted resistive matrix |
| – | 8 | 4 | – | 2 | Serial | No | 5 | 0.028 | Digital potentiometer |
| Micropower DACs | | | | | | | | | |
| – | 10 | 1 | U | 25 | Serial | No | 5 | 0.7 | Multiplying, 8-pin package |
| – | 12 | 1 | U | 25 | Parallel | Yes | 5 or ±5 | 0.75 | Multiplying, 0.2 mW in economy mode |
| MAX550B | 8 | 1 | U | 4 | Serial | No | 2.5–5 | 0.2 | 5 µW in economy mode |
| – | 12 | 1 | U | 60 | Serial | No | 2.7–5 | 0.5 | Multiplying, SPI-compatible interface |
| – | 12 | 1 | I | 0.6 | Serial | No | 5 | 0.025 | Multiplying |
| – | 12 | 1 | U | 10 | Serial | No | 5 or 3 | 0.75 (5 V), 0.36 (3 V) | 6-pin package, 0.15 µW in economy mode, I²C-compatible interface |
| Precision DACs | | | | | | | | | |


Figure 5.4 shows two linearization methods. The method that minimizes Δ_L (Fig. 5.4b) reduces the error Δ_L by half compared with linearization through the boundary points (Fig. 5.4a).

For a digital-to-analog converter with n binary bits, in the ideal case (in the absence of conversion errors) the analog output U_OUT is related to the input binary number as follows:

U_OUT = U_REF (a_1·2^-1 + a_2·2^-2 + … + a_n·2^-n),

where U_REF is the DAC reference voltage (from a built-in or an external source).

Since Σ 2^-i = 1 − 2^-n, with all bits turned on the DAC output voltage equals

U_OUT(a_1 … a_n) = U_REF (1 − 2^-n) = (U_REF/2^n)(2^n − 1) = Δ·(2^n − 1) = U_FS,

where U_FS is the full-scale voltage.

Thus, with all bits turned on, the output voltage of the converter, which in this case forms U_FS, differs from the reference voltage U_REF by the value of the converter's least significant bit Δ, defined as

Δ = U_REF/2^n.

When a single i-th bit is turned on, the DAC output voltage is

U_OUT(a_i) = U_REF·2^-i.

A digital-to-analog converter converts the digital binary code Q4 Q3 Q2 Q1 into an analog quantity, usually a voltage U_OUT or a current I_OUT. Each bit of the binary code has a certain weight, the weight of the i-th bit being twice that of the (i−1)-th. The operation of the DAC can be described by the formula

U_OUT = e (Q_1·1 + Q_2·2 + Q_3·4 + Q_4·8 + …),

where e is the voltage corresponding to the weight of the least significant bit and Q_i is the value of the i-th bit of the binary code (0 or 1).

For example, the code 1001 corresponds to

U_OUT = e (1·1 + 0·2 + 0·4 + 1·8) = 9e,

and the code 1100 to

U_OUT = e (0·1 + 0·2 + 1·4 + 1·8) = 12e.
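The same formula in code, checked against the two worked examples (bits are given MSB-first, Q4 down to Q1):

```python
# Sketch of the LSB-weight formula U_out = e * sum(Q_i * 2**(i-1)).

def dac_out(bits_msb_first, e=1.0):
    weights = [2 ** i for i in range(len(bits_msb_first))]   # 1, 2, 4, 8 ...
    return e * sum(q * w for q, w in zip(reversed(bits_msb_first), weights))

print(dac_out([1, 0, 0, 1]))   # code 1001 -> 9 * e
print(dac_out([1, 1, 0, 0]))   # code 1100 -> 12 * e
```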

Analog-to-digital converters (ADCs) are devices that accept analog signals and produce output digital signals suitable for computers and other digital devices. The conversion characteristic reflects the dependence of the output digital code on the input DC voltage; it can be specified graphically, in tabular form, or analytically.

STATIC PARAMETERS

Intercode transition voltage – the input voltage at which both adjacent code combinations are equally probable.

Quantization step – the difference between adjacent intercode transition voltages.

Zero offset voltage – the parallel shift of the conversion characteristic along the abscissa axis.

Conversion coefficient deviation – the error at the end point of the conversion characteristic.

ADC nonlinearity – the deviation of the actual input voltage at a given point from the value defined by the linearized conversion characteristic at the same point. It is expressed as a number of quantization steps or as a percentage of the maximum input voltage.

Differential nonlinearity– deviation of actual quantization steps from their average value.

DYNAMIC PARAMETERS OF ADC.

1. Sampling rate – the rate at which sample values of the signal are produced, measured in samples per second or in hertz.

2. Conversion time – the time from the ADC start pulse, or from the moment the analog input changes, until a stable code appears at the output. For some ADCs this value depends on the input signal; for others it is constant. When operating without a sample-and-hold device, this value is the aperture time.

3. Frequency error of the transfer coefficient – the error in forming the sample values when working with changing signals, defined for a sinusoidal input signal. (For the 8-bit, 80 MHz ADC K1107PV2: f = 7 MHz at the 0.99 level.)

4. Aperture time – the time during which uncertainty remains between the sample value and the instant to which it refers. It consists of the aperture delay and the aperture uncertainty (jitter).

Depending on how the conversion process unfolds in time, ADCs are divided into:

1. Serial

2. Parallel

3. Serial-parallel.

SERIAL ADCs

ADC with stepped ramp voltage.

A positive voltage is applied to the converter input. The counter is preset to zero, so the voltage at the DAC output is also zero, and a logic 1 is set at the comparator output. Pulses from the clock generator arrive at the input of a three-input NAND gate; however, since a logic 0 is written in the RS flip-flop, the pulses do not pass to the counter input.

After the start pulse, the RS flip-flop switches to the state with logic 1 at its output, and clock pulses begin to arrive at the counter input. The number stored in the counter increases, and the DAC output voltage rises accordingly. At some moment it becomes equal to the input voltage of the converter, the comparator switches to logic 0, and pulses stop arriving at the counter input. The same comparator signal also switches the RS flip-flop back to the logic 0 state, finally stopping the conversion. The resulting output code corresponds to the DAC output voltage, i.e. to the input analog signal, to within one LSB. The process can then be repeated.
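A behavioral sketch of this counting loop, with an ideal DAC and comparator assumed (the 10-bit, 10 V figures match the example below):

```python
# Sketch: stepped-ramp (counting) conversion with ideal DAC and comparator.

def counting_adc(v_in, v_ref=10.0, n_bits=10):
    lsb = v_ref / (2 ** n_bits - 1)
    count = 0
    # Each pass of the loop stands for one clock period T_min.
    while count < 2 ** n_bits - 1 and count * lsb < v_in:
        count += 1
    return count

print(counting_adc(3.3))   # ~338: conversion time grows with the input level
```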

The minimum clock pulse period can be found from the condition

T_min ≥ t_comp + t_count + t_DAC + t_RC,

where t_comp is the comparator response delay, t_count is the counter delay, t_DAC is the DAC settling time, and t_RC is the delay of the RC circuit.

Example. Let us calculate the conversion time of a 10-bit ADC of this type.

Elements used:

DAC – K572PA1: number of bits N = 10, output voltage settling time t_DAC = 5·10^-6 s. At V_REF = 10 V the quantization step is

LSB = 10/(2^10 − 1) ≈ 10 mV.

COMPARATOR – 521SA3: at dV = 3 mV, t_comp = 100 ns.

The RC time constant is chosen equal to 0.5·10^-6 s.

t_count = 0.05·10^-6 s,

T_min ≥ 0.1 + 0.05 + 5.0 + 0.5 = 5.65 µs.

Maximum input signal measurement time:

(2^10 − 1) · 5.65·10^-6 s ≈ 6 ms; the sampling rate is then about 160 Hz.

The aperture time is 6 ms.
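The arithmetic of the example, repeated in code for checking (component figures as quoted above):

```python
# Sketch: timing budget of the 10-bit counting ADC example.

n_bits, v_ref = 10, 10.0
lsb = v_ref / (2 ** n_bits - 1)                 # ~9.8 mV, rounded to 10 mV
t_comp, t_count, t_dac, t_rc = 0.1e-6, 0.05e-6, 5.0e-6, 0.5e-6
t_min = t_comp + t_count + t_dac + t_rc         # 5.65 us clock period
t_max = (2 ** n_bits - 1) * t_min               # ~5.8 ms worst case (~6 ms)
print(lsb, t_min, t_max, 1.0 / t_max)           # ~173 Hz (rounded to 160 Hz above)
```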

ADCs of this type are used together with a sample-and-hold device, or for converting slowly changing signals. The ADC error is determined by the accuracy parameters of the DAC used.

A variation of this type is the tracking ADC, which performs the conversion continuously. It uses an up/down counter, and the comparator determines the direction of counting: when V_IN > V_DAC the counter counts up, and when V_IN < V_DAC it counts down, so the voltage V_DAC continuously tends toward V_IN. The maximum tracking rate of the input signal is dV_IN/dt < LSB/T_min.


Successive approximation ADC.

The procedure for determining the output code is controlled by the successive approximation register. Initially, logic 0 is written to all bits of the register, and the DAC output voltage is zero. Then logic 1 is written to the most significant bit of the register. If the DAC output voltage is still less than the input voltage (logic 1 is set at the comparator output), the logic 1 in this bit is retained; if the DAC output voltage is greater than V_IN, the bit is reset to zero. Logic 1 is then written to the next bit, and in this way the values of all bits, down to the least significant one, are determined. After that a ready signal is issued, and the measurement cycle can be repeated.
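The bit-by-bit trial described above, as a sketch with an ideal DAC and comparator assumed:

```python
# Sketch: successive approximation, one trial per bit from MSB to LSB.

def sar_adc(v_in, v_ref=10.0, n_bits=12):
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)              # tentatively set this bit
        v_dac = v_ref * trial / 2 ** n_bits    # ideal DAC output for the trial
        if v_dac <= v_in:                      # comparator keeps the bit
            code = trial
    return code                                # exactly n_bits iterations

print(sar_adc(3.3))   # 1351 for 12 bits and a 10 V reference
```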

This type of ADC has a speed advantage over the previous one, which makes it the most widely used. Its conversion time equals T_min · N,

where T_min is the minimum clock pulse period, determined in the same way as for the previous ADC, and N is the number of bits.

Example: the integrated ADC 1108PV2 contains all the elements on one chip: DAC, reference voltage source, successive approximation register, clock generator and comparator. N = 12; minimum conversion time 2 µs.

ADC with time-pulse conversion (linear coding method).

An ADC of this type converts the measured voltage into a time interval proportional to it, which is filled with pulses of a reference frequency. The interval is formed by a ramp (sawtooth) voltage generator and a comparator. The pulses are counted by a counter, whose count determines the ADC output code.

The speed of such a circuit is higher than that of the ADC with a stepped ramp, since it contains no DAC and is determined by the speed of the comparator and the counter. The comparator switching time is chosen with regard to the overdrive that provides the necessary accuracy of comparison between the input signal and the ramp voltage.

To reduce errors, the reference-frequency generator and the ramp generator must be mutually stable.

A typical ADC of this kind: N = 10, f_ref = 100 MHz, t_conv = 10 µs.

ADC with dual-slope integration.

The drawback of the serial ADCs considered above is their relatively low noise immunity, which limits their resolution. Increasing the number of bits requires high-precision DACs, which makes such ADCs more expensive to produce.

The dual-slope integration principle allows one to largely avoid these shortcomings. The full conversion cycle consists of two phases. In the first, the input voltage is integrated by an analog integrator over a fixed time interval T_0. This interval is formed by a counter whose input receives pulses from a generator with clock frequency f_clk.

The interval T_0 equals

T_0 = N_max · t_clk,

where t_clk = 1/f_clk is the period of the clock generator and N_max is the maximum counter capacity, which determines the ADC resolution.

The charge accumulated on capacitor C during the first phase is then q_1 = V_IN(avg) · T_0 / R, where R is the integrator input resistance and V_IN(avg) is the input voltage averaged over T_0.

In the second phase, the capacitor is discharged by the reference voltage source V_REF. The polarity of the reference voltage is opposite to that of the input signal, so the voltage on capacitor C begins to decrease. During this time the counter counts the pulses of clock frequency f_clk, starting from zero. At the moment the comparator detects the zero crossing, counting stops and the number is written to the output register. The charge q_2 that discharged the capacitor is q_2 = V_REF · T_2 / R, where T_2 is the duration of the second phase.
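Under these ideal-element assumptions, the charge balance q_1 = q_2 gives the standard dual-slope result, which the sketch below evaluates. Note that R, C and the clock frequency drop out of the final count, which is what makes the method tolerant of component drift:

```python
# Sketch: dual-slope charge balance. Phase 1: q1 = V_in_avg * T0 / R.
# Phase 2: q2 = V_ref * T2 / R. Setting q1 = q2 gives
# T2 = T0 * V_in_avg / V_ref, so the count is N = N_max * V_in_avg / V_ref.

def dual_slope_count(v_in_avg, v_ref=10.0, n_max=4095):
    return round(n_max * v_in_avg / v_ref)

print(dual_slope_count(3.3))   # 1351 of 4095 for a 3.3 V average input
```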