Static and dynamic characteristics of measuring instruments. Development and description of a system of measuring channels for determining static and dynamic characteristics. Basic parameters and errors of the DAC

Classification of measuring instruments

Measuring instruments and their characteristics

The concept of a measuring instrument was presented in paragraph 1.2 as one of the fundamental concepts of metrology. It was noted that a measuring instrument (MI) is a special technical device that stores a unit of quantity, allows one to compare the measured quantity with its unit and has standardized metrological characteristics, i.e. characteristics that influence the results and accuracy of measurements.

Let us classify measuring instruments (SI) according to the following criteria:

§ by the method of implementing the measuring function;

§ by design;

§ by metrological purpose.

According to the method of implementing the measuring function, all measuring instruments can be divided into two groups:

§ those reproducing a quantity of a given (known) size (for example, a weight reproduces mass; a ruler, length; a standard cell, emf);

§ generating a signal (indication) that carries information about the value of the measured quantity.

The classification of measuring instruments by design is shown in the diagram in Fig. 4.1.

A measure is a measuring instrument in the form of a body or device designed to reproduce a physical quantity of one or more sizes whose values are known with the accuracy required for the measurement. The measure is the basis of measurement. The fact that many measurements are made with measuring devices or other instruments changes nothing, since many of them contain measures, and others are graduated against measures, so their scales can be thought of as stores of the measure. Finally, some measuring instruments (for example, pan balances) can only be used together with measures.


Fig. 4.1. Classification of measuring instruments by design.

A measuring device is a measuring instrument designed to generate a measurement information signal in a form accessible to direct perception by an observer. Depending on the form in which the information is presented, analog and digital devices are distinguished. Analog instruments are those whose readings are a continuous function of the measured quantity, e.g. a pointer instrument or a liquid-in-glass thermometer.

Figure 4.2 shows a generalized block diagram of a measuring device with a pointer indicating device.

A mandatory element of a measuring device is the reading device, the part of the instrument's design intended for reading the value of the measured quantity. In a digital measuring device the reading device is a digital display.


Fig. 4.2. Block diagram of a measuring device.

The reading device of an analog measuring instrument usually consists of a pointer and a scale. The scale has an initial and final value, within which the range of readings is located (Fig. 4.3).


Fig. 4.3. Reading device of an analog indicating instrument.

A measuring installation (setup) is a set of functionally integrated measuring instruments in which one or more measuring devices are used to convert the measured quantity into a signal.

The measuring installation may include measuring instruments, measures, converters, as well as auxiliary devices, regulators, and power supplies.

A measuring system is a set of measuring instruments and auxiliary devices, interconnected by communication channels, designed to generate measurement information signals in a form convenient for automatic processing, transmission and use in monitoring and control systems.

A measuring transducer is a measuring instrument designed to convert measurement information signals from one form to another. Depending on the types of input and output signals, measuring transducers are divided into:

§ primary transducers or sensors;

§ secondary converters.

A primary transducer is a measuring transducer to whose input the measured physical quantity is applied; it is the first element in the measuring chain.

The output signal of the primary transducer cannot be perceived directly by the observer; transforming it into a form accessible to direct observation requires a further stage of conversion. An example of a primary transducer is a resistance thermometer, which converts temperature into the electrical resistance of a conductor. Another example is the orifice plate of a variable-pressure-drop flowmeter, which converts flow rate into a differential pressure.

A secondary device is a converter whose input receives the output signal of a primary or normalizing converter. The output signal of the secondary device, like that of a measuring device, is available for direct perception by the observer. The secondary device closes the measuring chain.

A normalizing converter is an intermediate converter installed between the primary converter and the secondary device when the output signal of the primary converter does not match the input signal of the secondary device. An example is a normalizing bridge that converts the information signal of a variable resistance into a unified 0-5 mA or 0-20 mA direct-current signal.

The use of such normalizing converters makes it possible to use unified milliammeters as secondary devices for all measured physical quantities, which improves the ergonomics and layout of control panels.
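As a rough illustration of such a chain, a primary transducer feeding a normalizing converter can be sketched as follows. The linearized RTD model and all numeric values are assumptions for illustration, not taken from the text:

```python
# Sketch of a primary transducer + normalizing converter chain (illustrative
# values): a resistance thermometer converts temperature to resistance, and a
# normalizing converter maps that resistance to a unified 0-20 mA current.

R0 = 100.0        # resistance at 0 deg C, ohm (Pt100-style, assumed)
ALPHA = 0.00385   # temperature coefficient, 1/deg C (assumed)

def rtd_resistance(t_c: float) -> float:
    """Primary transducer: temperature -> resistance (linearized model)."""
    return R0 * (1.0 + ALPHA * t_c)

def normalize_to_current(r_ohm: float, t_min: float, t_max: float) -> float:
    """Normalizing converter: resistance -> unified 0-20 mA signal."""
    r_min, r_max = rtd_resistance(t_min), rtd_resistance(t_max)
    fraction = (r_ohm - r_min) / (r_max - r_min)
    return 20.0 * min(max(fraction, 0.0), 1.0)  # clamp to the 0-20 mA span

r = rtd_resistance(50.0)                  # 100 * (1 + 0.00385*50) = 119.25 ohm
i = normalize_to_current(r, 0.0, 100.0)   # mid-range -> 10 mA
print(r, i)
```

Any unified milliammeter can then display the 0-20 mA signal regardless of which physical quantity is being measured, which is the point made above.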

A scaling converter is a measuring transducer that changes the value of one of the quantities acting in the circuit of a measuring device by a given factor, without changing its physical nature. Examples are voltage and current measuring transformers, measuring amplifiers, etc.

According to their metrological purpose, all measuring instruments are divided into standards and working measuring instruments. The classification of measuring instruments by metrological purpose was given in detail in paragraph 2.2. “The procedure for transferring the sizes of units of physical quantities.”


Fig. 4.4. Static and dynamic characteristics of measuring instruments.

As noted above, measurements are divided into static and dynamic. Let us consider the metrological properties of measuring instruments that characterize the result of measuring constant and time-varying quantities. Figure 4.4 shows the classification of characteristics that reflect these properties.

The static characteristic of a measuring instrument is the functional relationship between the output quantity y and the input quantity x in the steady state, y = f(x). This dependence is also called the instrument scale equation, or the calibration characteristic of the instrument or converter. The static characteristic can be specified:

§ analytically;

§ graphically;

§ in tabular form.

In the general case, the static characteristic is described by the dependence

y = y_n + S·(x – x_n), (4.1)

where y_n, x_n are the initial values of the output and input quantities; y, x are the current values of the output and input quantities; S is the sensitivity of the measuring instrument.

The error of a measuring instrument (Δ) is the difference between the SI reading and the true (actual) value of the measured physical quantity. The error and its various components are the main standardized characteristics of an SI.

The sensitivity of a measuring instrument (S) is a property that can be quantitatively defined as the limit of the ratio of the increment of the output quantity Δy to the increment of the input quantity Δx:

S = lim(Δy/Δx) = dy/dx as Δx → 0. (4.2)

Figure 4.5 shows examples of static characteristics of measuring instruments: a) and b) are linear, c) is nonlinear. Linearity of the static characteristic is an important property of a measuring instrument for ease of use.

Nonlinearity of the static characteristic, especially for industrial measuring instruments, is allowed only when it is dictated by the physical principle of conversion.

It should be noted that for most measuring instruments, especially for primary transducers, the static characteristic can be considered linear only within the required accuracy of the measuring instrument.

A linear static characteristic has a constant sensitivity, independent of the value of the measured quantity. In the case of a linear static characteristic, the sensitivity can be determined by the formula

S = y_d/x_d = (y_k – y_n)/(x_k – x_n), (4.3)

where y_k, x_k are the final values of the output and input quantities; y_d = y_k – y_n is the range of variation of the output signal; x_d = x_k – x_n is the range of variation of the input signal.
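For a linear characteristic, the scale equation (4.1) and the span-ratio definition of sensitivity can be checked numerically. The 4-20 mA pressure-transmitter values below are illustrative assumptions, not taken from the text:

```python
# Sensitivity of a linear static characteristic, S = (y_k - y_n)/(x_k - x_n),
# for an illustrative 4-20 mA transmitter spanning 0-150 kPa (assumed values).

def sensitivity(y_n: float, y_k: float, x_n: float, x_k: float) -> float:
    """S = y_d / x_d: ratio of the output span to the input span."""
    return (y_k - y_n) / (x_k - x_n)

def static_characteristic(x: float, y_n: float, x_n: float, S: float) -> float:
    """Scale equation y = y_n + S*(x - x_n) (eq. 4.1)."""
    return y_n + S * (x - x_n)

S = sensitivity(4.0, 20.0, 0.0, 150.0)        # 16 mA over 150 kPa
y = static_characteristic(75.0, 4.0, 0.0, S)  # mid-scale input -> 12 mA
print(S, y)
```

Note that S stays the same at every point of the range, which is exactly what makes the linear characteristic convenient.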


Fig. 4.5. Static characteristics of measuring instruments: a), b) – linear; c) – nonlinear.

The measuring range is the range of values of the measured quantity within which the permissible error limits of the measuring instrument are normalized. The measuring range is always less than or equal to the reading range.

The concept of transmission coefficient applies to individual elements of measuring systems that perform the functions of directional transmission, scaling or normalization of measuring signals.

The transmission coefficient (k) is the ratio of the output quantity y to the input quantity x, i.e. k = y/x. As a rule, the transmission coefficient has a constant value at any point of the converter's range, and the converter types listed above (scaling, normalizing) have a linear static characteristic.

The dynamic characteristic is the functional dependence of the readings of a measuring instrument on the change of the measured quantity at each moment of time, i.e. y(t) = f[x(t)].

The deviation of the output quantity y(t) from the input quantity x(t) in the dynamic mode is shown in Fig. 4.6 for different laws of variation of the input quantity over time.

The dynamic error of a measuring instrument is defined as

Δy(t) = y(t) – k·x(t), (4.4)

where k·x(t) is the output quantity of a dynamically "ideal" converter.

The dynamic mode of a wide class of measuring instruments is described by linear inhomogeneous differential equations with constant coefficients. In thermal power engineering the dynamic properties of measuring instruments are most often modeled by a first-order dynamic link (aperiodic link):

T·dy(t)/dt + y(t) = k·x(t), (4.5)

where T is the conversion time constant, which characterizes the time for the output signal y(t) to settle to a steady value after a step change in the input quantity x(t).
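The behavior of such a first-order link under a step input can be sketched numerically. The time constant, transmission coefficient and step amplitude below are assumed for illustration:

```python
# Numerical sketch of a first-order (aperiodic) measuring link,
# T*dy/dt + y = k*x, responding to a step input; illustrates that after
# time T the output reaches ~63% of its steady value, and shows the
# dynamic error dy(t) = y(t) - k*x(t). Parameters are illustrative.

import math

T = 2.0    # time constant, s (assumed)
k = 1.0    # static transmission coefficient (assumed)
X_a = 5.0  # step amplitude (assumed)

def step_response(t: float) -> float:
    """Analytic solution y(t) for a step X_a applied at t = 0."""
    return k * X_a * (1.0 - math.exp(-t / T))

y_at_T = step_response(T)    # ~0.632 * k * X_a after one time constant
dyn_err = y_at_T - k * X_a   # dynamic error at t = T (negative: output lags)
h_at_T = y_at_T / X_a        # transient response h(t) = y(t)/X_a
print(y_at_T, dyn_err, h_at_T)
```

The value h(T) ≈ 0.63 is precisely the property used later in the thermal-receiver example to define the time constant.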

Fig. 4.6. Deviation of the output quantity from the input quantity in the dynamic mode

Transient characteristics are used to describe the dynamic properties of measuring instruments. The transient response is the response of a dynamic system to a unit step input. In practice, step inputs of arbitrary amplitude X_a are used:

x(t) = 0 for t < 0; x(t) = X_a for t ≥ 0. (4.6)

The transient response h(t) is related to the response y(t) of a linear dynamic system to a real, non-unit step input by the relation

h(t) = y(t)/X_a. (4.7)

The transient response describes the inertia of the measurement, which causes delay and distortion of the output signal. The transient response can have an aperiodic or an oscillatory form.

The dynamic characteristics of linear measuring instruments do not depend on the magnitude and sign of the step disturbance, so transient responses measured experimentally at different step values must coincide. If experiments with step disturbances of different magnitude and sign give quantitatively or qualitatively different results, this indicates nonlinearity of the measuring instrument under study.
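This linearity test can be sketched as follows, using a first-order link as a stand-in for the instrument under study (all parameter values are assumptions):

```python
# Linearity check sketch: for a linear measuring instrument the normalized
# transient response h(t) = y(t)/X_a must not depend on the step size X_a.
# A first-order link serves as the (linear) instrument model here;
# values are illustrative.

import math

def response(t: float, X_a: float, T: float = 2.0, k: float = 1.0) -> float:
    """Output of a first-order link to a step of amplitude X_a."""
    return k * X_a * (1.0 - math.exp(-t / T))

times = [0.5, 1.0, 2.0, 4.0]
h_small = [response(t, 1.0) / 1.0 for t in times]    # small step
h_large = [response(t, 10.0) / 10.0 for t in times]  # large step

# For a linear model the two normalized responses coincide:
print(all(abs(a - b) < 1e-12 for a, b in zip(h_small, h_large)))
```

With a real instrument, a systematic discrepancy between h_small and h_large (beyond measurement noise) would be the nonlinearity indicator described above.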

The dynamic characteristics that describe the response of a measuring instrument to harmonic inputs over a wide range of frequencies are called frequency characteristics; they include the amplitude-frequency and phase-frequency characteristics.

When frequency characteristics are determined experimentally, harmonic (for example, sinusoidal) oscillations from a generator are applied to the input of the measuring instrument:

x(t) = A_x·sin(ωt + φ_x). (4.8)

If the measuring instrument under study is a linear dynamic system, then the oscillations of the output value in steady state will also be sinusoidal (see Fig. 4.6, c):

y(t) = A_y·sin(ωt + φ_y), (4.9)

where φ_x is the initial phase, rad; ω is the angular frequency, rad/s.

The amplitude of the output oscillations and the phase shift depend on the properties of the measuring instrument and the frequency of the input oscillations.

The dependence A(ω), which shows how the ratio of the amplitude of the output oscillations A_y(ω) of a linear dynamic system to the amplitude of the input oscillations A_x(ω) changes with frequency, is called the amplitude-frequency response (AFC) of the system:

A(ω) = A_y(ω)/A_x(ω). (4.10)

The frequency dependence of the phase shift between input and output oscillations is called the phase-frequency response (PFC) of the system:

φ(ω) = φ_y(ω) – φ_x(ω). (4.11)
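For the first-order link, the AFC and PFC have the standard closed forms A(ω) = k/√(1 + (ωT)²) and φ(ω) = −arctan(ωT); a minimal numerical sketch with illustrative parameters:

```python
# AFC and PFC of a first-order measuring link:
# A(w) = k / sqrt(1 + (w*T)^2),  phi(w) = -atan(w*T).
# A minimal sketch; the parameter values are assumed.

import math

T = 2.0  # time constant, s (assumed)
k = 1.0  # static transmission coefficient (assumed)

def afc(w: float) -> float:
    """Amplitude-frequency response A(w) = A_y(w)/A_x(w)."""
    return k / math.sqrt(1.0 + (w * T) ** 2)

def pfc(w: float) -> float:
    """Phase-frequency response phi(w) = phi_y(w) - phi_x(w), rad."""
    return -math.atan(w * T)

print(afc(0.0))      # 1.0: slow inputs pass with no attenuation
print(afc(1.0 / T))  # 1/sqrt(2): amplitude down ~29% at w = 1/T
print(pfc(1.0 / T))  # -pi/4: output lags the input by 45 degrees
```

The monotonic fall of A(ω) and growth of the lag |φ(ω)| with frequency reproduce the qualitative behavior shown in Fig. 4.7.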

Frequency characteristics are determined both experimentally and theoretically, using the differential equation that relates the output and input signals (4.5). The procedure for obtaining the frequency characteristics of a linear system from its differential equation is described in detail in the literature on automatic control theory.

Figure 4.7 shows typical frequency characteristics of a measuring instrument whose dynamic properties correspond to the first-order linear differential equation (4.5). As the frequency of the input signal increases, such a measuring instrument attenuates the amplitude of the output signal and increases the phase shift of the output signal relative to the input, which leads to a growing dynamic error.

Fig. 4.7. Amplitude-frequency (a) and phase-frequency (b) characteristics of a measuring instrument whose dynamic properties correspond to a first-order linear link (aperiodic link).

Let us show by example how to evaluate the dynamic characteristics of measuring instruments whose dynamic properties can be modeled by a first-order linear link.

Example. Calculation of the time constant T of a thermal receiver.

Fig. 4.8. Schematic diagram and dynamic characteristics of the thermal receiver

The thermal inertia of the thermal receiver arises because it heats up more slowly than a rapid (stepwise) change in the temperature of the medium, which causes the readings of the temperature measuring device to lag.

The dynamic error of the thermal receiver is determined from its heat balance,

c·ρ·V·dt_pr/dτ = α·S·(t_av – t_pr),

where c, ρ, V, S are the heat capacity, density, volume and surface area of the thermal receiver; α is the heat transfer coefficient; t_av and t_pr are the temperatures of the medium and of the temperature sensor.

The time constant of the thermal receiver is found from the condition t_pr(T) = 0.63·(t_av – t_n) and, for a thin-walled cover (V/S ≈ δ), is equal to

T = c·ρ·δ/α,

where δ is the thickness of the walls of the thermal receiver cover.

Let the following be given: ρ = 7×10³ kg/m³; c = 0.400 kJ/(kg·deg); α = 200 W/(m²·deg); δ = 2.0 mm.

The estimated time constant is then T = c·ρ·δ/α = (0.400×10³ × 7×10³ × 2.0×10⁻³)/200 = 28 s.

If the temperature of the medium t_av = 520 °C is measured by an electronic potentiometer with an error of Δ = ±5 °C, the settling time of the instrument readings T_y can be estimated from the first-order response as T_y = T·ln((t_av – t_n)/Δ).
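The arithmetic of this example can be checked with a short script. The thin-wall formula T = cρδ/α and the initial temperature t_n = 20 °C are assumptions made for this sketch, not values stated in the text:

```python
# Numerical check of the thermal-receiver example, assuming the common
# thin-wall approximation T = c*rho*delta/alpha for the time constant and
# a first-order settling-time estimate T_y = T*ln(dt0/D). The initial
# temperature t_n = 20 deg C is an assumption not given in the text.

import math

c = 400.0       # heat capacity, J/(kg*K)  (0.400 kJ/(kg*deg))
rho = 7e3       # density, kg/m^3
alpha = 200.0   # heat transfer coefficient, W/(m^2*K)
delta = 2.0e-3  # cover wall thickness, m

T = c * rho * delta / alpha  # time constant, s
print(T)                     # 28.0 s

t_av, t_n, D = 520.0, 20.0, 5.0       # medium temp; assumed start temp; error
T_y = T * math.log((t_av - t_n) / D)  # time to settle within +/- D
print(round(T_y, 1))
```

Under these assumptions the receiver needs roughly two minutes before its reading agrees with the 520 °C medium temperature to within the ±5 °C instrument error.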


  • CONTENTS
  • Introduction
  • 1. Technical specification
  • 2. Development and description of a system of measuring channels for determining static and dynamic characteristics
  • 2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments
  • 2.2 Development of complexes of standardized metrological characteristics
  • 3. Development of metrological measuring instruments
  • 3.1 Development of metrological reliability of measuring instruments
  • 3.2 Changes in metrological characteristics of measuring instruments during operation
  • 3.3 Development of models for standardizing metrological characteristics
  • 4. Classification of signals
  • 5. Channel development
  • 5.1 Development of a channel model
  • 5.2 Development of a measuring channel model
  • Literature

Introduction

One of the main forms of state metrological supervision and departmental control aimed at ensuring the uniformity of measurements in the country, as mentioned earlier, is the verification of measuring instruments. Instruments released from production and repair, received from abroad, and those in operation and storage are subject to verification. The basic requirements for the organization and procedure of verification are established by GOST "GSI. Verification of measuring instruments. Organization and procedure." The term "verification" was introduced by GOST "GSI. Metrology. Terms and definitions" as "the determination by a metrological body of the errors of a measuring instrument and the establishment of its suitability for use." In some cases, instead of determining the error values, verification checks whether the error lies within acceptable limits.

Thus, verification of measuring instruments is carried out to establish their suitability for use. Measuring instruments are considered suitable for use during a given verification interval if verification confirms their compliance with the metrological and technical requirements for that type of instrument.

Measuring instruments are subjected to primary, periodic, extraordinary, inspection and expert verification. Primary verification is performed on instruments upon release from production or repair, and on imported instruments. Instruments in operation or storage undergo periodic verification at set calibration intervals, established to ensure the suitability of the instrument for use in the period between verifications. Inspection verification determines the suitability of measuring instruments for use during state supervision and departmental metrological control over their condition and use. Expert verification is performed when controversial questions arise regarding the metrological characteristics (MX) or serviceability of measuring instruments and their suitability for use.

Metrological certification is a set of activities to study the metrological characteristics and properties of a measuring instrument in order to decide whether it is suitable for use as a reference instrument. A special program of work is usually drawn up for metrological certification; its main stages are: experimental determination of metrological characteristics; analysis of the causes of failures; establishment of a verification interval, etc. Metrological certification of measuring instruments used as reference instruments is carried out before commissioning, after repair, and when the category of the reference instrument changes. The results of metrological certification are documented in appropriate documents (protocols, certificates, notices of unsuitability of the measuring instrument). The characteristics of the types of measuring instruments used determine the methods of their verification.

In the practice of verification laboratories, various methods of verifying measuring instruments are known; for unification they are reduced to the following:

* direct comparison using a comparator (i.e. using comparison tools);

* direct measurement method;

* method of indirect measurements;

* method of independent verification (i.e. verification of measuring instruments of relative quantities, which does not require transfer of unit sizes).

Verification of measuring systems is carried out by state metrological bodies called the State Metrological Service. The activities of the State Metrological Service are aimed at solving the scientific and technical problems of metrology and implementing the necessary legislative and control functions, such as: establishing the units of physical quantities approved for use; creating reference measuring instruments and methods and instruments of the highest precision; developing all-Union verification schemes; determining physical constants; developing measurement theory, error estimation methods, etc.

The tasks facing the State Metrological Service are solved with the help of the State System for Ensuring the Uniformity of Measurements (GSI). The state system for ensuring the uniformity of measurements is the regulatory and legal basis for the metrological support of scientific and practical activities with respect to assessing and ensuring measurement accuracy. It is a set of regulatory and technical documents that establish a unified nomenclature, methods for presenting and assessing the metrological characteristics of measuring instruments, rules for the standardization and certification of measurements and the registration of their results, and requirements for state tests, verification and examination of measuring instruments. The main regulatory and technical documents of the state system for ensuring the uniformity of measurements are state standards. On the basis of these basic standards, regulatory and technical documents are developed that specify the general requirements of the basic standards for particular industries, measurement areas and measurement methods.

1. Technical specifications

1.1 Development and description of a system of measuring channels for determining static and dynamic characteristics.

1.2 Materials of scientific and methodological developments of the ISIT department

1.3 Purpose and objectives

1.3.1 This system is intended to determine the characteristic instrumental components of measurement errors.

1.3.2 Develop a measuring information system that automatically acquires the necessary information, processes it, and outputs it in the required form.

1.4 System requirements

1.4.1 The rules for selecting sets of standardized metrological characteristics of measuring instruments and the methods for standardizing them are determined by the GOST 8.009-84 standard.

1.4.2 Set of standardized metrological characteristics:

1. measures and digital-to-analog converters;

2. measuring and recording instruments;

3. analog and analog-to-digital measuring converters.

1.4.3 Instrumental error of the first model of normalized metrological characteristics:

Random component;

Dynamic error;

1.4.4 Instrumental error of the second model of normalized metrological characteristics:

where is the main SI error without breaking it into components.

1.4.5 Compliance of the models of standardized metrological characteristics with GOST 8.009-84 regarding the formation of complexes of standardized metrological characteristics.

2. Development and description of a system of measuring channels to determine static and dynamic characteristics

2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments

When using SI, it is fundamentally important to know how closely the measurement information contained in the output signal corresponds to the true value of the measured quantity. For this purpose, certain metrological characteristics (MX) are introduced and standardized for each SI.

Metrological characteristics are characteristics of the properties of a measuring instrument that influence the measurement result and its errors. Characteristics established by regulatory and technical documents are called standardized, while those determined experimentally are called actual. The MX nomenclature, the rules for selecting complexes of standardized MX for measuring instruments, and the methods for standardizing them are determined by GOST 8.009-84 "GSI. Standardized metrological characteristics of measuring instruments."

Metrological characteristics of SI allow:

determine measurement results and calculate estimates of the characteristics of the instrumental component of the measurement error in real conditions of SI application;

calculate MX channels of measuring systems consisting of a number of measuring instruments with known MX;

make the optimal choice of measuring instruments that provide the required quality of measurements under known conditions of their use;

compare SIs of various types taking into account the conditions of their use.

When developing principles for the selection and standardization of measuring instruments, it is necessary to adhere to a number of provisions outlined below.

1. The main condition for the possibility of solving all of the listed problems is the presence of an unambiguous connection between normalized MX and instrumental errors. This connection is established through a mathematical model of the instrumental component of the error, in which the normalized MX must be arguments. It is important that the MX nomenclature and methods of expressing them are optimal. Experience in operating various SIs shows that it is advisable to normalize the MX complex, which, on the one hand, should not be very large, and on the other hand, each standardized MX must reflect the specific properties of the SI and, if necessary, can be controlled.

Standardization of MX measuring instruments should be carried out on the basis of uniform theoretical premises. This is due to the fact that measuring instruments based on different principles can participate in measurement processes.

Normalized MX must be expressed in such a form that with their help it is possible to reasonably solve almost any measurement problems and at the same time it is quite simple to control the measuring instruments for compliance with these characteristics.

Normalized MX must provide the possibility of statistical integration and summation of the components of the instrumental measurement error.

In general, the instrumental error can be defined as the sum (combination) of the following components:

Δ_0(t), due to the difference between the actual conversion function under normal conditions and the nominal one assigned by the relevant documents to this type of SI; this error is called the main error;

Δ_add(t), caused by the reaction of the SI to changes in the external influencing quantities and in the informative parameters of the input signal relative to their nominal values; this error is called additional;

Δ_dyn, caused by the reaction of the SI to the rate (frequency) of change of the input signal; this component, called the dynamic error, depends both on the dynamic properties of the measuring instrument and on the frequency spectrum of the input signal;

Δ_int, caused by the interaction of the measuring instrument with the measurement object or with other measuring instruments connected in series with it in the measuring system; this error depends on the parameters of the SI input circuit and of the output circuit of the measurement object.

Thus, the instrumental component of the SI error can be represented as

Δ_ins = Δ_0 * Δ_add * Δ_dyn * Δ_int,

where * is the symbol for the statistical combination of components.

The first two components represent the static error of the SI, and the third is the dynamic error. Of these, only the main error is determined solely by the properties of the SI. The additional and dynamic errors depend both on the properties of the SI itself and on other factors (external conditions, parameters of the measuring signal, etc.).

The requirements for universality and simplicity of the statistical combination of the components of the instrumental error dictate that they be statistically independent (uncorrelated). However, the assumption of independence of these components is not always true.
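The difference between the statistical combination of uncorrelated components and worst-case arithmetic summation can be sketched as follows; the component values are illustrative, and the root-sum-square rule is the standard combination for independent components:

```python
# Sketch of combining independent (uncorrelated) error components: the
# statistical combination is a root-sum-square, while the worst-case
# estimate is the arithmetic sum of the limits. Values are illustrative.

import math

components = [0.10, 0.05, 0.08]  # main, additional, dynamic error limits

statistical = math.sqrt(sum(e * e for e in components))  # assumes independence
worst_case = sum(abs(e) for e in components)             # arithmetic sum

print(round(statistical, 4))  # smaller, realistic estimate
print(round(worst_case, 4))   # upper-bound estimate
```

The statistical estimate is always the smaller of the two, which is why the second model described below (arithmetic summation) tightens the requirements on MX.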

Isolating the dynamic error of the SI as a summed component is permissible only in a particular, but very common case, when the SI can be considered a linear dynamic link and when the error is a very small value compared to the output signal. A dynamic link is considered linear if it is described by linear differential equations with constant coefficients. For SI, which are essentially nonlinear links, separating static and dynamic errors into separately summable components is unacceptable.

Normalized MX must be invariant to the conditions of use and operating mode of the SI and reflect only its properties.

The choice of MX must be made so that the user is able to calculate from them the SI characteristics under real operating conditions.

Standardized MX given in regulatory and technical documentation reflect the properties not of a single specimen of a measuring instrument but of the entire set of measuring instruments of a given type, i.e. they are nominal. A type is a set of measuring instruments that have the same purpose, layout and design and satisfy the same requirements regulated in the technical specifications.

The metrological characteristics of an individual SI of a given type can take any values within the range of nominal MX values. It follows that the MX of a measuring instrument of a given type should be described as a non-stationary random process. Mathematically rigorous treatment of this circumstance would require normalizing not only the limits of the MX as random variables but also their time dependence (i.e., autocorrelation functions). This would lead to an extremely complex standardization system and make control of MX practically impossible, since it would have to be carried out at strictly defined intervals.

As a result, a simplified standardization system was adopted, providing a reasonable compromise between mathematical rigor and the necessary practical simplicity. In the adopted system, low-frequency changes in the random components of the error, whose period is commensurate with the duration of the verification interval, are not taken into account when normalizing MX. They determine the reliability indicators of measuring instruments and the choice of rational calibration intervals and other similar characteristics. High-frequency changes in the random components of the error, whose correlation intervals are commensurate with the duration of the measurement process, must be taken into account by normalizing, for example, their autocorrelation functions.

2.2 Development of complexes of standardized metrological characteristics

The wide variety of SI groups makes it impossible to regulate specific MX complexes for each group in a single regulatory document. At the same time, all SIs cannot be characterized by a single set of normalized MX, even one presented in the most general form.

The main criterion for dividing measuring instruments into groups is the commonality of the complex of standardized MX needed to determine the characteristic instrumental components of measurement errors. It is advisable to divide all measuring instruments into three large groups, listed in order of increasing complexity of their MX: 1) measures and digital-to-analog converters; 2) measuring and recording instruments; 3) analog and analog-to-digital measuring converters.

When establishing the complex of standardized MX, the following model of the instrumental component of the measurement error is adopted:

Δ_ins = Δ_si * Δ_int,

where the symbol "*" denotes the combination of the SI error in real conditions of its use, Δ_si, with the error component Δ_int caused by the interaction of the SI with the measurement object. By combining we mean applying some functional to the components that allows calculating the error caused by their joint influence. In each case the functional is determined based on the properties of the specific SI.

The entire set of MX can be divided into two large groups. In the first, the instrumental component of the error is determined by statistically combining its individual components. In this case the confidence interval containing the instrumental error is determined with a given confidence probability less than one. For the MX of this group, the following error model in real conditions of use is adopted (model 1):

Δ_ins = Δ_0s * Δ_0r * Δ_0H * (Δ_c1 * … * Δ_cL) * Δ_dyn,

where Δ_0s is the systematic component;

Δ_0r is the random component;

Δ_0H is the random component due to hysteresis;

(Δ_c1 * … * Δ_cL) is the combination of the additional errors;

Δ_dyn is the dynamic error;

L is the number of additional errors, equal to the number of quantities that significantly affect the error in real conditions.

Depending on the properties of a given type of SI and its operating conditions, individual components may be absent.

The first model is chosen if it is accepted that the error may occasionally exceed the value calculated from the standardized characteristics. In this case, using the MX complex, one can calculate point and interval characteristics within which the instrumental component of the measurement error lies with any given confidence probability close to, but less than, unity.
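As a numerical illustration of model 1, the sketch below combines independent error components statistically (root-sum-of-squares with a coverage factor); the component values, the coverage factor k, and the function name are hypothetical assumptions of this sketch, not prescriptions of GOST 8.009-84.

```python
import math

def combine_statistically(components, k=2.0):
    """Statistically combine independent error components (model 1 style).

    components: standard deviations of the individual error components
    (systematic residual, random, hysteresis, additional, dynamic);
    k: coverage factor for the chosen confidence probability
    (k ~ 2 for P ~ 0.95 under a normality assumption).
    Returns the half-width of the interval containing the instrumental
    error with probability P < 1.
    """
    return k * math.sqrt(sum(c ** 2 for c in components))

# Hypothetical components, in percent of the range:
parts = [0.10, 0.05, 0.02, 0.08, 0.04]
print(round(combine_statistically(parts), 4))  # interval half-width
```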

For the second group of MX, statistical combination of the components is not applied. Such measuring instruments include laboratory instruments, as well as most reference (standard) instruments whose use does not involve repeated observations with averaging of the results. The instrumental error in this case is defined as the arithmetic sum of the largest possible values of its components. This estimate gives a confidence interval with probability equal to one; it is an upper-bound estimate of the sought error interval, covering all possible values, including those realized very rarely. This significantly tightens the requirements for the MX, which is justified only for the most critical measurements, for example those related to human health and life or to possible catastrophic consequences of incorrect measurements.

Arithmetic summation of the largest possible values of the instrumental error components leads to the inclusion in the complex of normalized MX of limits of permissible error rather than statistical moments. This is acceptable for measuring instruments having no more than three error components, each determined by a separate normalized MX. In this case the calculated estimates of the instrumental error obtained by arithmetic combination of the largest values of the components and by statistical summation of the characteristics of the components (with a probability smaller than, but quite close to, unity) differ only slightly. For this case SI error model 2 is adopted:

Δ_SI = Δ_0 * Δ_c1 * ... * Δ_cL * Δ_dyn,

where Δ_0 is the basic SI error taken without decomposition into components (unlike model 1).
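To illustrate the claim that for three or fewer components the arithmetic and statistical estimates practically coincide, here is a minimal sketch; the error limits and the coverage factor k (chosen for a probability close to unity) are hypothetical assumptions.

```python
import math

def combine_arithmetic(limits):
    """Model-2 style estimate: arithmetic sum of the largest possible values."""
    return sum(abs(x) for x in limits)

def combine_statistical(limits, k=1.73):
    """Statistical summation of the same limits with coverage factor k."""
    return k * math.sqrt(sum(x ** 2 for x in limits))

# Three equal hypothetical error limits, in percent of the range:
limits = [0.1, 0.1, 0.1]
print(round(combine_arithmetic(limits), 3))   # upper-bound estimate
print(round(combine_statistical(limits), 3))  # statistical estimate is close
```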

3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS

3.1 Development of metrological reliability of measuring instruments.

Model 2 is applicable only for those SIs whose random component is negligible.

The choice of MX is regulated in sufficient detail by GOST 8.009-84, which lists the characteristics to be standardized for the SI groups mentioned above. This list may be adjusted for a specific measuring instrument with allowance for its features and operating conditions. It is important that MX making an insignificant contribution to the instrumental error compared with the others should not be normalized. Whether a given error component is significant is determined on the basis of the significance criteria given in GOST 8.009-84.

During operation, the metrological characteristics and parameters of a measuring instrument change. These changes are random, monotonic, or fluctuating in nature and lead to failures, i.e. to the inability of the SI to perform its functions. Failures are divided into non-metrological and metrological.

A non-metrological failure is one caused by reasons not related to changes in the MX of the measuring instrument. Such failures are mostly obvious in nature, appear suddenly, and can be detected without verification.

A metrological failure is one caused by an MX going beyond the established permissible limits. As studies have shown, metrological failures occur much more often than non-metrological ones, which necessitates the development of special methods for their prediction and detection. Metrological failures are divided into sudden and gradual.

A sudden failure is characterized by an abrupt change in one or more MX. Because of their random nature, these failures cannot be predicted. Their consequences (jumps in readings, loss of sensitivity, etc.) are easily detected during operation of the device, i.e. they are obvious in the nature of their manifestation. A feature of sudden failures is the constancy of their intensity (failure rate) over time, which makes it possible to apply classical reliability theory to their analysis. For this reason, failures of this kind are not considered further.

A gradual failure is characterized by a monotonic change in one or more MX. By the nature of their manifestation, gradual failures are hidden and can be detected only from the results of periodic verification of the measuring instrument. It is these failures that are considered below.

The concept of metrological serviceability of a measuring instrument is closely related to the concept of "metrological failure". It denotes the state of the SI in which all normalized MX meet the established requirements. The ability of an SI to maintain the established values of its metrological characteristics for a given time under specified modes and operating conditions is called metrological reliability. The specific feature of the metrological reliability problem is that the basic assumption of classical reliability theory, the constancy of the failure rate in time, does not hold for it. Modern reliability theory is oriented toward products having two characteristic states: operational and inoperative. The gradual change of the SI error makes it possible to introduce any number of operational states with different levels of operating efficiency, determined by how close the error is to its permissible limit values.

The concept of metrological failure is to a certain extent conditional, since it is determined by the MX tolerance, which in general can vary depending on specific conditions. It is also significant that the exact moment of occurrence of a metrological failure cannot be recorded because of the hidden nature of its manifestation, whereas the obvious failures dealt with by classical reliability theory can be detected at the moment they occur. All this required the development of special methods for analyzing the metrological reliability of SI.

The reliability of a measuring instrument characterizes its behavior over time and is a generalized concept that includes stability, reliability, durability, maintainability (for recoverable measuring instruments) and storability.

The stability of an SI is a qualitative characteristic reflecting the constancy of its MX in time. It is described by the time dependences of the parameters of the error distribution law. Metrological reliability and stability are different properties of the same SI aging process. Stability carries more information about the constancy of the metrological properties of the measuring instrument; it is, so to speak, an "internal" property. Reliability, on the contrary, is an "external" property, since it depends both on stability and on the accuracy of measurements and the tolerances used.

Reliability is the property of an SI to maintain an operational state continuously for some time. It is characterized by two states: operational and inoperative. For complex measuring systems, however, there may be a larger number of states, since not every failure leads to a complete cessation of their functioning. A failure is a random event associated with the disruption or cessation of SI operation. This determines the random nature of the reliability indicators, the main one of which is the distribution of the failure-free operation time of the SI.

Durability is the property of an SI to maintain its operational state until a limiting state occurs. An operational state is one in which all the MX of the SI correspond to the normalized values; a limiting state is one in which the use of the SI is unacceptable.

After a metrological failure, the characteristics of an SI can be returned to the permissible ranges by appropriate adjustment. The adjustment process can take more or less time depending on the nature of the metrological failure, the design of the measuring instrument, and a number of other factors. The concept of "maintainability" was therefore introduced into the reliability characteristic. Maintainability is the property of a measuring instrument consisting in its adaptability to the prevention and detection of the causes of failures and to the restoration and maintenance of its working state through maintenance and repair. It is characterized by the expenditure of time and money on restoring the measuring instrument after a metrological failure and maintaining it in working condition.

As will be shown below, the process of change of the MX is continuous regardless of whether the SI is in use or is kept in storage. The property of an SI to retain the values of its reliability, durability, and maintainability indicators during and after storage and transportation is called storability.

3.2 Changes in the metrological characteristics of measuring instruments during operation

The metrological characteristics of an SI may change during operation. In what follows we speak of changes of the error Δ(t), implying that any other MX can be treated in the same way.

It should be noted that not all error components are subject to change in time. For example, methodological errors depend only on the measurement technique used. Among the instrumental errors there are many components practically not subject to aging, for example the size of the quantum in digital instruments and the quantization error determined by it.

The change of the MX of measuring instruments in time is caused by aging processes in their units and elements arising from interaction with the environment. These processes occur mainly at the molecular level and do not depend on whether the measuring instrument is in operation or in storage. Consequently, the main factor determining the aging of measuring instruments is the calendar time elapsed since their manufacture, i.e. their age. The rate of aging depends primarily on the materials and technologies used. Research has shown that the irreversible processes changing the error proceed very slowly, and in most cases these changes cannot be recorded during an experiment. Hence the great importance of mathematical methods on the basis of which models of error change are built and metrological failures are predicted.

The problem solved in determining the metrological reliability of measuring instruments is to find the initial changes of the MX and to construct a mathematical model extrapolating the results obtained over a long time interval. Since the change of MX in time is a random process, the main tool for constructing mathematical models is the theory of random processes.

The change of the SI error in time is a nonstationary random process. A set of its realizations is shown in Fig. 1 in the form of error-modulus curves. At each moment t_i they are characterized by a probability density distribution law p(Δ, t_i) (curves 1 and 2 in Fig. 2a). In the center of the band (curve Δ_cp(t)) the highest density of errors is observed; it gradually decreases toward the boundaries of the band, theoretically tending to zero at an infinite distance from the center. The upper and lower boundaries of the SI error band can therefore be represented only as quantile boundaries within which most of the realized errors lie with confidence probability P. Outside the boundaries, with probability (1 − P)/2, lie the errors most distant from the center of the realizations.

To apply a quantile description of the boundaries of the error band in each of its sections t_i, it is necessary to know estimates of the mathematical expectation Δ_cp(t_i) and the standard deviation σ(t_i) of the individual realizations. The error value at the boundaries in each section t_i is

Δ_r(t_i) = Δ_cp(t_i) ± k·σ(t_i),

where k is the quantile factor corresponding to the given confidence probability P; its value depends substantially on the form of the error distribution law across the sections. It is practically impossible to determine the form of this law when studying SI aging processes, because the distribution laws can change appreciably over time.

A metrological failure occurs when the error curve crosses the line of the permissible value ±Δ_pr. Failures can occur at various times in the range from t_min to t_max (see Fig. 2a); these points are the intersections of the 5% and 95% quantiles with the line of the permissible error value. When the 95% quantile curve reaches the permissible limit, 5% of the devices have experienced a metrological failure. The distribution of the moments of occurrence of such failures is characterized by the probability density p_H(t) shown in Fig. 2b. Thus, as a model of the nonstationary random process of change of the SI error modulus in time, it is advisable to use the time dependence of the 95% quantile of this process.
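The prediction of a metrological failure from the 95% quantile can be sketched as follows, assuming a hypothetical linear model of the drift of the mean error and of the growth of its standard deviation; all numbers and names are illustrative, not taken from the text.

```python
K95 = 1.645  # normal-law quantile factor for P = 0.95 (an assumption)

def quantile_95(t, a, b, sigma0, c):
    """95% quantile of the error band: mean a + b*t, st. deviation sigma0 + c*t."""
    return (a + b * t) + K95 * (sigma0 + c * t)

def failure_time(limit, a, b, sigma0, c):
    """Time at which quantile_95(t) reaches the permissible limit (closed form)."""
    return (limit - a - K95 * sigma0) / (b + K95 * c)

# Hypothetical data: initial mean error 0.2%, drift 0.01%/month,
# sigma 0.05% growing by 0.002%/month, permissible limit 0.5%.
t_fail = failure_time(0.5, 0.2, 0.01, 0.05, 0.002)
print(round(t_fail, 2))  # predicted months until metrological failure
```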

Indicators of accuracy, metrological reliability, and stability of an SI correspond to different functionals built on the trajectories of change of its MX Δ(t). The accuracy of the SI is characterized by the MX value at the moment of time considered, and for a population of measuring instruments by the distribution of these values, represented by curve 1 for the initial moment and curve 2 for the moment t_i. Metrological reliability is characterized by the distribution of the moments of occurrence of metrological failures (see Fig. 2b). The stability of the SI is characterized by the distribution of MX increments over a given time.

3.3 Development of models for standardizing metrological characteristics

The MX standardization system is based on the principle of adequacy of the measurement error estimate and its actual value, provided that the estimate actually found is an estimate “from above.” The last condition is explained by the fact that an estimate “from below” is always more dangerous, since it leads to greater damage from unreliable measurement information.

This approach is quite understandable, given that exact normalization of MX is impossible because of the many influencing factors that are not taken into account (owing to ignorance of them and the lack of tools for identifying them). Normalization is therefore, to a certain extent, an act of compromise between the desire for a complete description of measurement characteristics and the possibility of achieving it under real conditions, with the known experimental and theoretical limitations and the requirements of simplicity and clarity of engineering methods. In other words, overly complex methods of describing and normalizing MX are not viable.

The consumer obtains information about the normalized MX from the technical documentation for the SI and only in rare, exceptional cases independently carries out an experimental study of individual SI characteristics. It is therefore very important to know the relationship between the MX of an SI and the instrumental measurement errors: this would allow the measurement error to be found directly from a known MX complex, eliminating one of the most labor-intensive and complex tasks, the summation of the components of the total measurement error. However, this is hampered by one more circumstance: the difference between the MX of a particular SI and the metrological properties of a population of SI of the same type. For example, the systematic error of a given SI is a deterministic quantity, while for a population of SI it is a random quantity. The complex of normalized MX must be established on the basis of the requirements of the real operating conditions of specific measuring instruments. On this basis it is advisable to divide all SI into functional groups. For the first and third groups of SI, the characteristics of interaction with devices connected to the input and output of the SI and the non-informative parameters of the output signal should be normalized. In addition, for the third group the nominal transformation function f_nom(x) must be normalized (in SI of the second group it is replaced by a scale or other calibrated reading device), as well as the complete dynamic characteristics. These characteristics are meaningless for SI of the second group, except for recording instruments, for which it is advisable to normalize complete or partial dynamic characteristics.

The most common forms of recording the SI accuracy class are:

1) the two-term formula for the limit of permissible relative error

δ = ±[c + d(|x_k/x| − 1)],

where c and d are constant coefficients according to formula (3.6); x_k is the final value of the measurement range; x is the current value;

2) the equivalent form

δ = ±(a + b·x_k/x), where b = d and a = c − b;

3) a symbolic notation, characteristic of foreign digital measuring instruments,

δ_op = ±(…% of reading + …% of range).

GOST 8.009-84 provides two main models (MI and MII) for forming the complexes of normalized MX, corresponding to two models of the formation of SI errors based on the statistical combination of these errors.

Model II is applicable to SI whose random error component can be neglected. It involves calculating the largest possible values of the SI error components so as to guarantee, with probability P = 1, that the SI error does not exceed the calculated limits. Model II is used for the most critical measurements, where technical and economic factors, possible catastrophic consequences, threats to human health, etc., must be taken into account. When the number of components exceeds three, this model gives a coarser (because rarely occurring components are included) but reliable estimate "from above" of the basic SI error.

Model I gives a rational estimate of the basic SI error with probability P < 1, since rarely realized error components are neglected.

Thus, the complex of normalized MX for error models I and II provides for the statistical combination of the individual error components with allowance for their significance.

However, for some SI such statistical combination is impractical. These are precision laboratory and industrial (process) measuring instruments that measure slowly changing quantities under conditions close to normal, and reference instruments whose use does not involve repeated observations with averaging of the results. For such instruments, the basic error, or the arithmetic sum of the largest possible values of the individual error components, can be taken as the instrumental error (model III).

Arithmetic summation of the largest values of the error components is possible if there are no more than three such components. In this case the estimate of the total instrumental error differs little from that obtained by statistical summation.

4. CLASSIFICATION OF SIGNALS

A signal is a material carrier of information representing a physical process, one of whose parameters is functionally related to the measured physical quantity. This parameter is called informative.

A measuring signal is a signal containing quantitative information about the measured physical quantity. The basic concepts, terms, and definitions in the field of measuring signals are established by GOST 16465-70 "Radio signals. Terms and definitions". Measuring signals are extremely varied; their classification according to various criteria is shown in Fig. 3.

According to the character of the informative and time parameters, measuring signals are divided into analog, discrete, and digital.

An analog signal is a signal described by a continuous or piecewise-continuous function Y_a(t); both the function itself and its argument t can take any values within the given intervals Y ∈ (Y_min; Y_max) and t ∈ (t_min; t_max).

A discrete signal is a signal that varies discretely in time or in level. In the first case it can take, at discrete moments of time nT, where T = const is the sampling interval (period) and n = 0, 1, 2, ... is an integer, any values Y_d(nT) ∈ (Y_min; Y_max), called samples (readings). Such signals are described by lattice functions. In the second case, the values of the signal Y_a(t) exist at any moment t ∈ (t_min; t_max), but they can take only a limited range of values h_i = nq that are multiples of the quantum q.

Digital signals are signals quantized in level and discrete in time, Y_c(nT), described by quantized lattice functions (quantized sequences) that at discrete moments of time nT take only a finite series of discrete values, the quantization levels h_1, h_2, ..., h_n.
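The three signal classes can be illustrated with a short sketch: a continuous model signal is sampled with period T (a discrete signal) and then quantized with quantum q (a digital signal). The signal, T, and q here are hypothetical.

```python
import math

T = 0.05   # sampling interval, s (hypothetical)
q = 0.1    # quantum, i.e. quantization step (hypothetical)
f = 1.0    # signal frequency, Hz

def analog(t):
    """Continuous model signal Y_a(t)."""
    return math.sin(2 * math.pi * f * t)

# Discrete in time: samples Y(nT) described by a lattice function.
samples = [analog(n * T) for n in range(5)]
# Quantized in level as well: a digital signal taking values n*q.
digital = [round(q * round(y / q), 1) for y in samples]

print([round(s, 3) for s in samples])
print(digital)
```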

According to the character of their change in time, signals are divided into constant signals, whose values do not change in time, and variable signals, whose values change in time. Constant signals are the simplest type of measuring signal.

Variable signals can be continuous in time or pulsed. A signal whose parameters vary continuously is called continuous. A pulse signal is a signal of finite energy that differs appreciably from zero only during a limited time interval commensurate with the settling time of the transient process in the system on which the signal is intended to act.

According to the degree of availability of a priori information, variable measuring signals are divided into deterministic, quasi-deterministic, and random. A deterministic signal is one whose law of change is known and whose model contains no unknown parameters. The instantaneous values of a deterministic signal are known at any moment of time. The signals at the outputs of measures are deterministic (to a certain degree of accuracy). For example, the output signal of a low-frequency sine-wave generator is characterized by the amplitude and frequency values set on its controls; the errors in setting these parameters are determined by the metrological characteristics of the generator.

Quasi-deterministic signals are signals with a partially known nature of change over time, i.e. with one or more unknown parameters. They are most interesting from a metrological point of view. The vast majority of measurement signals are quasi-deterministic.

Deterministic and quasi-deterministic signals are divided into elementary, described by simple mathematical formulas, and complex. Elementary signals include constant and harmonic signals, as well as signals described by the unit and delta functions.

Signals can be periodic or non-periodic. Non-periodic signals are divided into almost periodic and transient. An almost periodic signal is one whose values approximately repeat when a properly chosen number, an almost-period, is added to the time argument. A periodic signal is a special case of such signals. Almost periodic functions are obtained by combining periodic functions with incommensurable periods, for example Y(t) = sin(ωt) − sin(√2·ωt). Transient signals describe transient processes in physical systems.

A periodic signal is one whose instantaneous values repeat at a constant time interval. The period T of the signal is the smallest such interval. The frequency f of a periodic signal is the reciprocal of its period.

A periodic signal is characterized by its spectrum. Three types of spectrum are distinguished:

* complex - a complex-valued function of a discrete argument that is an integer multiple of the frequency f of the periodic signal Y(t);

* amplitude - a function of a discrete argument equal to the modulus of the complex spectrum of the periodic signal;

* phase - a function of a discrete argument equal to the argument of the complex spectrum of the periodic signal.
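The three spectra above can be sketched for a sampled periodic signal using the discrete Fourier transform; the test signal (a single harmonic with unit amplitude) and the normalization by 2/N are assumptions of this sketch.

```python
import cmath
import math

N = 16  # samples over one period (hypothetical)
signal = [math.sin(2 * math.pi * k / N) for k in range(N)]

def dft(x):
    """Discrete Fourier transform: the complex spectrum at multiples of f."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
            for m in range(n)]

spectrum = dft(signal)                          # complex spectrum
amplitude = [2 * abs(c) / N for c in spectrum]  # amplitude spectrum (modulus)
phase = [cmath.phase(c) for c in spectrum]      # phase spectrum (argument)

print(round(amplitude[1], 6))  # the single harmonic has amplitude 1.0
```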

A measuring system, by definition, is intended for perceiving, processing, and storing measurement information, in the general case about heterogeneous physical quantities, through various measuring channels (IC). Calculating the error of a measuring system therefore reduces to estimating the errors of its individual channels.

The resulting relative error of the IC can be written in the two-term form

δ(x) = ±[δ_e + δ_b(x_P/x − 1)],

where x is the current value of the measured quantity; x_P is the limit of the given channel measurement range, at which the relative error is minimal; δ_b and δ_e are the relative errors calculated at the beginning and end of the range, respectively.
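A sketch of how such a two-term channel error behaves across the range; the coefficient values and their interpretation are hypothetical assumptions of this sketch.

```python
delta_e = 0.5  # relative error at the end of the range, % (hypothetical)
delta_b = 0.2  # coefficient tied to the beginning of the range, % (hypothetical)
x_P = 10.0     # range limit at which the relative error is minimal

def channel_error(x):
    """Two-term relative error of the measuring channel at reading x."""
    return delta_e + delta_b * (x_P / x - 1)

print(round(channel_error(10.0), 2))  # minimal at the range limit
print(round(channel_error(2.0), 2))   # grows toward the beginning of the range
```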

The IC is a chain of various sensing, converting, and recording links.

5. Channel development

5.1 Development of a channel model

In real data transmission channels, the signal is affected by complex interference, and it is almost impossible to give a mathematical description of the received signal. Therefore, when studying signal transmission through channels, idealized models of these channels are used. A data transmission channel model is a description of the channel that allows one to calculate or estimate its characteristics and, on this basis, to explore various ways of constructing a communication system without direct experimental data.

The model of a continuous channel is the so-called Gaussian channel: the noise in it is additive and is an ergodic normal process with zero mathematical expectation. The Gaussian channel adequately reflects only channels with fluctuation noise. For multiplicative interference, a channel model with a Rayleigh distribution is used; for impulse noise, a channel with a hyperbolic distribution.
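A minimal sketch of the Gaussian channel model: additive, zero-mean normal noise is applied to the transmitted samples (the signal, noise level, and seed are hypothetical).

```python
import math
import random

random.seed(1)   # fixed seed for reproducibility
sigma = 0.1      # noise standard deviation (hypothetical)

def gaussian_channel(samples):
    """Additive Gaussian noise with zero mathematical expectation."""
    return [s + random.gauss(0.0, sigma) for s in samples]

tx = [math.sin(2 * math.pi * k / 8) for k in range(8)]  # transmitted
rx = gaussian_channel(tx)                                # received

noise = [r - s for r, s in zip(rx, tx)]
mean_noise = sum(noise) / len(noise)
print(abs(mean_noise) < 3 * sigma)  # empirical noise mean is near zero
```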

The discrete channel model coincides with the models of error sources.

A number of mathematical models of error distribution in real communication channels have been proposed, such as those of Gilbert, Mertz, Mandelbrot, and others.

5.2 Development of a measuring channel model

Previously, measuring equipment was designed and manufactured mainly in the form of separate instruments for measuring one or several physical quantities. Today, scientific experiments, the automation of complex production processes, control, and diagnostics are unthinkable without information-measuring systems (IMS) of various purposes, which automatically obtain the necessary information directly from the object under study, process it, and output it in the required form. Specialized measuring systems are being developed for almost all areas of science and technology.

When designing an IMS to given technical and operational characteristics, the task arises of choosing a rational structure and a set of technical means for its construction. The structure of the IMS is determined mainly by the measurement method on which it is based, and the number and type of technical means by the information process occurring in the system. The nature of the information process and the types of information transformation can be assessed by analyzing the information model of the IMS, but constructing this model is rather labor-intensive, and the model itself is so complex that it hinders the solution of the problem.

Because in third-generation IMS information processing is carried out mainly by general-purpose computers, which are a structural component of the IMS and are selected from a limited number of serial machines, the information model of the IMS can be simplified by reducing it to a model of a measuring channel (IC). All measuring channels of the IMS, covering the elements of the information process from obtaining information from the object of study or control to its display, processing, and storage, contain a certain limited number of types of information transformation. By combining all types of information conversion in one measuring channel and isolating the latter from the IMS, and also bearing in mind that analog signals always act at the input of the measuring system, we obtain two models of measuring channels: with direct (Fig. 4a) and with inverse (Fig. 4b) transformation of measurement information.

In the models, information is transformed in nodes 0 - 4. The arrows indicate the direction of the information flows, and their letter designations the type of transformation.

Node 0 is the output of the object of study or control, at which analog information A characterizing the state of the object is generated. Information A arrives at node 1, where it is converted to the form A_n for further transformations in the system. Node 1 can convert a non-electrical information carrier into an electrical one and perform amplification, scaling, linearization, etc., i.e. normalization of the parameters of the information carrier A.

In node 2, the normalized information carrier A_n is modulated for transmission over the communication line and presented in the form of an analog A_n or discrete D_m signal.

Analog information A_n is demodulated in node 3_1 and sent to node 4_1, where it is measured and displayed.

Fig.4 Model of the measuring channel of direct (a) and reverse (b) transformations of measuring information

Discrete information in node 3_2 is either converted into analog information A_n and passed to node 4_1, or, after digital conversion, sent to a digital display device or to a processing device.

In some ICs, the normalized information carrier A_n from node 1 goes immediately to node 4_1 for measurement and display. In other ICs, the analog information A, without a normalization operation, enters node 2 directly, where it is sampled.

Thus, the information model (Fig. 4a) has six branches through which information flows are transmitted: the analog branches 0-1-2-3_1-4_1 and 0-1-4_1, and the analog-discrete branches 0-1-2-3_2-4_1, 0-1-2-3_2-4_2, 0-2-3_2-4_1, and 0-2-3_2-4_2. Branch 0-1-4_1 is not used in constructing measuring channels of the IMS but only in autonomous measuring instruments, and therefore is not shown in Fig. 4a.

The model shown in Fig. 4b differs from the model in Fig. 4a only in the presence of the branches 3_2-1'-0, 3_1-1'-0, 3_2-1'-1, and 3_1-1'-1, through which the analog information carrier A_n is transmitted back. In node 1' the output discrete information carrier is converted into a signal homogeneous with the input information carrier A or with the normalized carrier A_n. Compensation can be performed with respect to either A or A_n.

Analysis of the information models of the measuring channels of the IMS showed that when they are constructed using the direct conversion method only five variants of structure are possible, and when measurement methods with inverse (compensating) information conversion are used, twenty.

In most cases (especially when constructing IMS for remote objects), the generalized information model of the IMS measuring channel has the form shown in Fig. 4a. The analog-discrete branches 0-1-2-3_2-4_2 and 0-2-3_2-4_2 are the most widespread. As can be seen, for these branches the number of levels of information conversion in the IC does not exceed three.

Since the nodes contain technical means that transform information, and the number of transformation levels is limited, these means can be combined into three groups. This makes it possible, when developing an IMS measuring channel, to select the technical means needed to implement a particular structure. The group of technical means of node 1 includes the entire set of primary measuring transducers, as well as unifying (normalizing) measuring transducers (UMT) performing scaling, linearization, power conversion, etc.; test-generation blocks; and reference measures.

Node 2, when analog-discrete branches are present, contains another group of measuring instruments: analog-to-digital converters (ADC) and switches (CM), which connect the appropriate information source to the IC or to a processing device, as well as communication channels (CC).

The third group (node 3) combines code converters (PC), digital-to-analog converters (DAC), and delay lines (DL).

The given IC structure, implementing the direct measurement method, is shown without the switching element and the ADC connections that control its operation. It is typical, and most multichannel IMS, especially long-range IMS, are built on its basis.

Of interest are methods for calculating ICs for the various information models discussed above. A rigorous mathematical calculation is impossible, but by using simplified approaches to determining the components of the resulting error, their parameters and distribution laws, by specifying the confidence probability, and by taking into account the correlations between components, a simplified mathematical model of a real measuring channel can be created and calculated. Examples of calculating the error of channels with analog and digital recorders are considered in the works of P. V. Novitsky.



The most important point characterizing both DACs and ADCs is that their inputs or outputs are digital, which means the analog signal is quantized in level. An N-bit word typically represents one of 2^N possible states, so an N-bit DAC (with a fixed voltage reference) can produce only 2^N distinct analog values, and an N-bit ADC can output only 2^N different binary codes. Analog signals can be represented as a voltage or a current.

The resolution of an ADC or DAC can be expressed in several ways: least significant bit (LSB) weight, ppm of full scale (ppm FS), millivolts (mV), etc. Different devices (even from the same chip manufacturer) are specified differently, so ADC and DAC users must be able to convert between these characteristics to compare devices properly. Some values of the least significant bit (LSB) are given in Table 1.

Table 1. Quantization: Least Significant Bit (LSB) value

Resolution N   2^N         LSB (FS = 10 V)    ppm FS    % FS       dB FS
2-bit          4           2.5 V              250000    25         -12
4-bit          16          625 mV             62500     6.25       -24
6-bit          64          156 mV             15625     1.56       -36
8-bit          256         39.1 mV            3906      0.39       -48
10-bit         1024        9.77 mV (10 mV)    977       0.098      -60
12-bit         4096        2.44 mV            244       0.024      -72
14-bit         16384       610 µV             61        0.0061     -84
16-bit         65536       153 µV             15        0.0015     -96
18-bit         262144      38 µV              4         0.0004     -108
20-bit         1048576     9.54 µV (10 µV)    1         0.0001     -120
22-bit         4194304     2.38 µV            0.24      0.000024   -132
24-bit         16777216    596 nV*            0.06      0.000006   -144
*600 nV is the Johnson (thermal) noise of a 2.2 kΩ resistor at 25 °C in a 10 kHz bandwidth. Easy to remember: 10-bit quantization at a full-scale value FS = 10 V corresponds to LSB = 10 mV, an accuracy of 1000 ppm or 0.1 %.

All other values can be obtained by scaling by powers of 2.
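Any row of Table 1 can be regenerated programmatically. The small sketch below assumes a 10 V full scale and expresses the LSB weight in volts, ppm, percent and dB of full scale:

```python
import math

def lsb_metrics(n_bits, v_fs=10.0):
    """Reproduce one row of Table 1 for an n_bits converter."""
    states = 2 ** n_bits
    lsb_v = v_fs / states                   # LSB weight in volts
    ppm_fs = 1e6 / states                   # LSB in parts per million of FS
    pct_fs = 100.0 / states                 # LSB in per cent of FS
    db_fs = 20 * math.log10(1.0 / states)   # LSB relative to FS in dB
    return lsb_v, ppm_fs, pct_fs, db_fs

lsb, ppm, pct, db = lsb_metrics(10)
print(f"10-bit: LSB = {lsb*1e3:.2f} mV, {ppm:.0f} ppm, {pct:.3f} %FS, {db:.1f} dBFS")
```

The dB column of Table 1 is the same quantity rounded to the familiar 6 dB-per-bit rule (20·log10(2) ≈ 6.02 dB).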

Before looking at the internals of ADCs and DACs, it is necessary to discuss the expected performance and critical parameters of digital-to-analog and analog-to-digital converters. Let's look at the definition of errors and technical requirements for analog-to-digital and digital-to-analog converters. This is very important for understanding the strengths and weaknesses of ADCs and DACs built on different principles.

The first data converters were intended for measurement and control applications, where the exact moment at which the input signal was converted was usually unimportant and data rates were low. In such devices the DC characteristics of the A/D and D/A converters are important, while timing and AC characteristics are not.

Figure 1 shows the ideal transfer function of a unipolar three-bit digital-to-analog converter. Both the input and the output are quantized, so the graph of the transfer function contains eight discrete points. However this function is approximated, it is important to remember that the actual transfer characteristic of a digital-to-analog converter is not a continuous line but a set of discrete points.


Figure 1. Transfer function of an ideal three-bit digital-to-analog converter.

Figure 2 shows the transfer function of a three-bit ideal unsigned analog-to-digital converter. Note that the analog signal at the ADC input is not quantized, but its output is the result of quantizing that signal. The transfer characteristic of an analog-to-digital converter consists of eight horizontal lines, but when analyzing the offset, gain and linearity of the ADC, we will consider the line connecting the midpoints of these segments.



Figure 2. Transfer function of an ideal 3-bit ADC.

In both cases discussed, the full digital scale (all "1s") corresponds to the full analog scale, which coincides with the reference voltage or a voltage dependent on it. Therefore, a digital code represents a normalized relationship between an analog signal and a reference voltage.

In an ideal analog-to-digital converter, the first code transition occurs at a voltage of ½ LSB above zero and subsequent transitions occur every 1 LSB, so the last transition lies 1½ LSB below full scale. Since the analog signal at the ADC input can take any value while the output is a discrete code, an error arises between the actual analog input and the value represented by the output code. This error can reach half of the least significant bit. The effect is known as quantization error, or conversion uncertainty. In devices working with AC signals, this quantization error gives rise to quantization noise.
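The ½ LSB bound on quantization error can be checked numerically. Below is a minimal sketch of an ideal unipolar ADC with rounding to the nearest code; the 3-bit, 1 V full-scale values are illustrative:

```python
def ideal_adc(v_in, n_bits=3, v_fs=1.0):
    """Ideal unipolar ADC: first transition 1/2 LSB above zero (round to
    nearest code), so quantization error never exceeds 1/2 LSB."""
    lsb = v_fs / 2 ** n_bits
    code = int(v_in / lsb + 0.5)              # round positive input to nearest code
    return max(0, min(code, 2 ** n_bits - 1)) # clip to the available codes

lsb = 1.0 / 8
# reconstruct each code as code*LSB and check the error over the linear range
worst = max(abs(v / 1000 - ideal_adc(v / 1000) * lsb)
            for v in range(0, 875))           # stay below clipping at full scale
print(f"worst-case quantization error = {worst / lsb:.3f} LSB")
```

The sweep confirms that, before clipping sets in near full scale, the reconstruction error stays within half a quantization step.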

The examples shown in Figures 1 and 2 are the transfer characteristics of unsigned (unipolar) converters, which operate with a signal of only one polarity. This is the simplest type of converter, but bipolar converters are more useful in real applications.

There are two types of bipolar converters in common use. The simpler of them is an ordinary unipolar converter whose input is supplied with an analog signal containing a constant component; this component offsets the input signal by an amount corresponding to 1 MSB. Many converters can switch this voltage or current, allowing the converter to be used in either unipolar or bipolar mode.

The other, more complex type of converter is known as a signed ADC: in addition to N information bits it has an extra bit that indicates the sign of the analog signal. Signed analog-to-digital converters are used rather rarely, mainly in digital voltmeters.

There are four types of DC errors in ADCs and DACs: offset error, gain error, and two types of linearity error. The offset and gain errors of ADCs and DACs are similar to those of conventional amplifiers. Figure 3 shows them for the conversion of bipolar input signals. (Note that the offset error and the zero error, which are identical in amplifiers and in unipolar ADCs and DACs, differ in bipolar converters and should be considered separately.)



Figure 3. Converter zero offset error and gain error

The transfer characteristic of both a DAC and an ADC can be expressed as D = K + GA, where D is the digital code, A is the analog signal, and K and G are constants. In a unipolar converter K is zero; in an offset bipolar converter it equals 1 MSB. The offset error of the converter is the amount by which the actual value of K differs from its ideal value. The gain error is the amount by which G differs from its ideal value.

In general, the gain error can be expressed as the difference between the actual and ideal gain, in percent, or as the contribution of the gain error (in mV or in LSBs) to the total error at the maximum input signal. Typically the user is given the opportunity to trim these errors out: as with an amplifier, the offset is adjusted first with zero input, and then the gain is adjusted with the input near its maximum value. The trimming algorithm for bipolar converters is more complex.
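A two-point offset/gain trim of the model D = K + G·A can be sketched as follows; the converter parameters here (12 bits, 3 codes of offset, 0.5 % gain error) are hypothetical:

```python
def two_point_cal(a_zero_code, a_fs_code, n_bits=12):
    """Derive offset K and gain G of the model D = K + G*A from two
    measurements: the code at zero input and the code at full-scale input.
    Returns a function mapping a raw code to a corrected value in units
    of full scale (illustrative unipolar case)."""
    full = 2 ** n_bits - 1
    k = a_zero_code                        # offset, in codes
    g = (a_fs_code - a_zero_code) / full   # gain relative to ideal
    return lambda code: (code - k) / g / full

# hypothetical converter: +3 codes of offset, about 0.5 % gain error
correct = two_point_cal(a_zero_code=3, a_fs_code=3 + round(0.995 * 4095))
print(f"half scale reads back as {correct(3 + round(0.995 * 2047.5)):.4f} of FS")
```

As the text notes, this two-point procedure removes offset and gain errors but cannot remove nonlinearity.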

The integral nonlinearity of a DAC or ADC is analogous to the nonlinearity of an amplifier and is defined as the maximum deviation of the actual transfer characteristic of the converter from a straight line. It is generally expressed as a percentage of full scale (but may also be given in LSBs). There are two common methods of drawing that straight line: the end-point method and the best-straight-line method (see Figure 4).



Figure 4. Methods for measuring integral linearity error

In the end-point method, the deviation of any point of the characteristic (after gain correction) from a straight line drawn from the origin is measured. This is how Analog Devices, Inc. specifies the integral nonlinearity of converters intended for measurement and control tasks, since the error magnitude then depends on the deviation from the ideal characteristic rather than from an arbitrary "best approximation".

The best-straight-line method gives a more adequate prediction of distortion in applications dealing with AC signals, and it yields a smaller nonlinearity figure. The best straight line is drawn through the device's transfer characteristic using standard curve-fitting techniques, after which the maximum deviation from this line is measured. Integral nonlinearity measured this way is typically only about 50 % of that estimated by the end-point method. This makes the method attractive for quoting impressive figures in a specification, but less useful for analyzing real-world error values. For AC applications it is better to specify harmonic distortion than DC nonlinearity, so the best-straight-line method is rarely needed to characterize converter nonlinearity.
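The two line-fitting methods can be compared numerically. The sketch below computes INL both ways for a hypothetical bowed 3-bit DAC characteristic; the best-straight-line figure comes out smaller than the end-point one, as the text predicts:

```python
def inl_endpoint(levels):
    """INL by the end-point method: deviation of each measured level from
    the straight line through the first and last points, in LSB."""
    n = len(levels) - 1
    step = (levels[-1] - levels[0]) / n      # average LSB after gain correction
    return [(v - (levels[0] + i * step)) / step for i, v in enumerate(levels)]

def inl_best_fit(levels):
    """INL against a least-squares straight line (best straight line)."""
    n = len(levels)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(levels) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, levels))
             / sum((x - mx) ** 2 for x in xs))
    return [(y - (my + slope * (x - mx))) / slope for x, y in zip(xs, levels)]

# hypothetical 3-bit DAC output levels with a bowed characteristic
levels = [0.0, 1.1, 2.3, 3.4, 4.4, 5.3, 6.2, 7.0]
print(max(abs(e) for e in inl_endpoint(levels)))   # end-point INL, LSB
print(max(abs(e) for e in inl_best_fit(levels)))   # best-straight-line INL, LSB
```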

Another type of converter nonlinearity is differential nonlinearity (DNL). It is associated with the nonlinearity of the code transitions of the converter. Ideally, a change of one unit in the least significant bit of the digital code exactly corresponds to a change of one unit in the least significant bit of the analog signal. In a DAC, changing one least significant bit of the digital code should cause a change in the signal at the analog output exactly corresponding to the value of the least significant bit. At the same time, in an ADC, when moving from one digital level to the next, the value of the signal at the analog input must change exactly by the value corresponding to the least significant digit of the digital scale.

Where the change in the analog signal corresponding to a 1 LSB change of the digital code is greater or smaller than this value, one speaks of a differential nonlinearity (DNL) error. The DNL of a converter is usually specified as the maximum value of differential nonlinearity found at any transition.

If the DAC's differential nonlinearity is less than -1 LSB at any transition (see Figure 5), the DAC is said to be non-monotonic, and its transfer characteristic contains one or more local maxima or minima. A differential nonlinearity greater than +1 LSB does not cause non-monotonicity, but is also undesirable. In many DAC applications (especially closed-loop systems, where non-monotonicity can turn negative feedback into positive feedback), DAC monotonicity is very important. Monotonicity is often stated explicitly in the datasheet, although if the differential nonlinearity is guaranteed to be less than 1 LSB (i.e. |DNL| ≤ 1 LSB) the device will be monotonic even if this is not explicitly stated.

It is possible for an ADC to be non-monotonic, but the most common manifestation of DNL in an ADC is missing codes (see Figure 6). Missing codes (or non-monotonicity) in an ADC are just as undesirable as non-monotonicity in a DAC. Again, they occur when the magnitude of the DNL exceeds 1 LSB.
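DNL and monotonicity checks on measured DAC output levels can be sketched as follows; the level set is hypothetical, with a deliberately low MSB weight to force a DNL below -1 LSB:

```python
def dnl(levels):
    """Differential nonlinearity of a DAC from its measured output levels:
    each step is compared with the average (gain-corrected) LSB, in LSB."""
    step = (levels[-1] - levels[0]) / (len(levels) - 1)
    return [(b - a) / step - 1.0 for a, b in zip(levels, levels[1:])]

def is_monotonic(levels):
    """True if the output never decreases as the code increases."""
    return all(b >= a for a, b in zip(levels, levels[1:]))

# hypothetical 3-bit DAC whose MSB weight is badly low
levels = [0.0, 1.0, 2.0, 3.0, 2.5, 3.5, 4.5, 7.0]
d = dnl(levels)
print(min(d), is_monotonic(levels))   # a DNL below -1 LSB breaks monotonicity
```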



Figure 5. Non-ideal 3-bit DAC transfer function


Figure 6. Non-ideal 3-bit ADC transfer function

Identifying missing codes is more difficult than identifying non-monotonicity. All ADCs exhibit some code-transition noise, illustrated in Figure 7 (think of this noise as the last digit of a digital voltmeter flickering between adjacent values). As resolution increases, the range of input signal occupied by the transition noise can reach or even exceed the signal change corresponding to one LSB. In that case, especially in combination with a negative DNL error, it may happen for some (or even all) codes that transition noise is present over the entire range of input values assigned to the code. There may then be codes for which no input value guarantees that code at the output, although some range of inputs may produce it occasionally.



Figure 7. Combined effects of code transition noise and differential nonlinearity (DNL)

For a low-resolution ADC, the no-missing-codes condition can be defined as a combination of transition noise and differential nonlinearity that guarantees some level (say, 0.2 LSB) of noise-free code for every code. However, this cannot be achieved at the very high resolutions of today's sigma-delta ADCs, or even at the lower resolutions of wide-bandwidth ADCs. In these cases the manufacturer must define noise levels and resolution in some other way. Which method is used matters less than that the specification clearly defines the method used and the expected characteristics.



As the input digital signal D(t) is increased successively from 0 to 2^N - 1 in steps of one LSB, the output signal U_out(t) traces a stepped curve. This dependence is usually called the DAC conversion characteristic. In the absence of hardware errors, the midpoints of the steps lie on the ideal straight line 1 (Fig. 22), which corresponds to the ideal conversion characteristic. The actual conversion characteristic can differ substantially from the ideal one in the size and shape of the steps, as well as in their position on the coordinate plane. A number of parameters quantify these differences.

Static parameters

Resolution: the increment of U_out produced by converting adjacent values D_j, i.e. values differing by one LSB. This increment is the quantization step. For binary conversion codes, the nominal value of the quantization step is h = U_FS/(2^N - 1), where U_FS is the nominal maximum output voltage of the DAC (full-scale voltage) and N is the bit width of the DAC. The higher the bit width of the converter, the finer its resolution.
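For example, a 12-bit DAC with a 10 V full scale has h = 10 V / 4095 ≈ 2.44 mV. In code, using the formula above:

```python
def quantization_step(n_bits, u_fs):
    """Nominal quantization step h = U_FS / (2**N - 1) of an N-bit DAC."""
    return u_fs / (2 ** n_bits - 1)

print(f"h = {quantization_step(12, 10.0) * 1e3:.3f} mV")
```

Note that this definition divides by 2^N - 1 (the number of steps between the lowest and highest code), while Table 1 above divides the full scale by 2^N; for large N the difference is negligible.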

Full-scale error: the relative difference between the real and ideal values of the conversion scale limit in the absence of a zero offset. It is the multiplicative component of the total error. It is sometimes expressed in LSBs.

Zero-offset error: the value of U_out when the DAC input code is zero. It is the additive component of the total error. It is typically stated in millivolts or as a percentage of full scale.

Nonlinearity: the maximum deviation of the actual conversion characteristic U_out(D) from the optimal one (line 2 in Fig. 22). The optimal characteristic is found empirically so as to minimize the nonlinearity error. Nonlinearity is usually given in relative units, though reference data also quote it in LSBs.

Differential nonlinearity: the maximum change (with sign taken into account) in the deviation of the real conversion characteristic U_out(D) from the optimal one when moving from one input code value to an adjacent one. It is usually given in relative units or in LSBs.

Monotonicity of the conversion characteristic: the DAC output voltage U_out increases (decreases) as the input code D increases (decreases). If the differential nonlinearity exceeds the relative quantization step h/U_FS, the converter characteristic is non-monotonic.

The temperature instability of a D/A converter is characterized by the temperature coefficients of its full-scale error and zero-offset error.

Full scale and zero offset errors can be corrected by calibration (tuning). Nonlinearity errors cannot be eliminated by simple means.

Dynamic parameters

The dynamic parameters of the DAC are determined by the change in the output signal when the input code changes abruptly, usually from the value “all zeros” to “all ones” (Fig. 23).

Settling time: the time interval from the moment the input code changes (t = 0 in Fig. 23) to the last moment at which the condition

|U_out - U_FS| = δ/2

is satisfied.

Slew rate: the maximum rate of change of U_out(t) during the transient, defined as the ratio of the increment ΔU_out to the time Δt over which it occurs. It is usually specified for DACs with a voltage output; for a DAC with a current output this parameter largely depends on the type of output op-amp.
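Both dynamic parameters can be extracted from a sampled step response. The sketch below uses a hypothetical single-pole (exponential) settling toward 10 V and a half-LSB error band of a 12-bit converter; the time unit is microseconds:

```python
def settling_time(t, v, v_final, tol):
    """Last time at which |v - v_final| exceeds tol/2; the response is
    considered settled from that moment on (t and v are parallel lists)."""
    last = 0.0
    for ti, vi in zip(t, v):
        if abs(vi - v_final) > tol / 2:
            last = ti
    return last

def slew_rate(t, v):
    """Maximum |dV/dt| over the record, in volts per unit time."""
    return max(abs((v2 - v1) / (t2 - t1))
               for (t1, v1), (t2, v2) in zip(zip(t, v), zip(t[1:], v[1:])))

# hypothetical step response sampled every 0.1 us, time constant 0.5 us
t = [i * 0.1 for i in range(60)]
v = [10.0 * (1 - 2.718281828 ** (-ti / 0.5)) for ti in t]
print(settling_time(t, v, 10.0, tol=0.00488), slew_rate(t, v))
```

With a 12-bit, 10 V converter the error band tol is about 4.88 mV (two quantization steps of 2.44 mV), so tol/2 corresponds to roughly half an LSB.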

For multiplying DACs with voltage output, the unity gain frequency and power bandwidth are often specified, which are mainly determined by the properties of the output amplifier.

DAC noise

Noise at the DAC output can arise for various reasons rooted in the physical processes of semiconductor devices. To assess the quality of a high-resolution DAC, it is customary to use the root-mean-square noise, usually quoted in nV/√Hz over a given frequency band.

Glitches (pulse noise) are sharp, short spikes or dips in the output voltage that occur while the output code changes, caused by the non-simultaneous opening and closing of the analog switches in different bits of the DAC. For example, if, on moving from code 011...111 to 100...000 in a D/A converter with weighted-current summing, the switch of the most significant bit opens later than the switches of the lower bits close, a signal corresponding to code 000...000 will exist at the DAC output for some time.

Glitches are typical for high-speed DACs, in which the capacitances that could smooth them out are minimized. A radical way to suppress glitches is to use a sample-and-hold (deglitching) circuit. Glitches are characterized by their area (in pV·s).
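The glitch area is simply the time integral of the output deviation. A sketch with a hypothetical triangular glitch of 20 mV peak and 4 ns duration (note that 1 mV·ns = 1 pV·s):

```python
def glitch_area_pVs(t_ns, v_mV):
    """Trapezoidal integral of a transient deviation; with t in ns and
    v in mV the result is directly in pV*s (mV*ns = pV*s)."""
    return sum((t2 - t1) * (v1 + v2) / 2
               for (t1, v1), (t2, v2) in zip(zip(t_ns, v_mV),
                                             zip(t_ns[1:], v_mV[1:])))

# hypothetical triangular glitch: 0 -> 20 mV -> 0 over 4 ns
t = [0.0, 2.0, 4.0]
v = [0.0, 20.0, 0.0]
print(f"{glitch_area_pVs(t, v):.1f} pV*s")
```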

Table 2 lists the most important characteristics of some types of digital-to-analog converters.

table 2

DAC name   Bits  Channels  Output  Settling time, µs  Interface  Int. ref.  Supply, V  Power, mW              Note
General-purpose DACs
572PA1     10    1         I       5                  -          No         5; 15      30                     MOS switches, multiplying
-          10    1         U       25                 Serial     Yes        5 or ±5    2
594PA1     12    1         I       3.5                -          No         +5, -15    600                    Current switches
MAX527     12    4         U       3                  Parallel   No         ±5         110                    Input words loaded over an 8-bit bus
DAC8512    12    1         U       16                 Serial     Yes        5          5
-          14    8         U       20                 Parallel   No         5; ±15     420                    MOS switches, inverted resistor ladder
-          8     16        U       2                  Parallel   No         5 or ±5    120                    MOS switches, inverted resistor ladder
-          8     4         -       2                  Serial     No         5          0.028                  Digital potentiometer
Micropower DACs
-          10    1         U       25                 Serial     No         5          0.7                    Multiplying, 8-pin package
-          12    1         U       25                 Parallel   Yes        5 or ±5    0.75                   Multiplying, 0.2 mW in power-down mode
MAX550B    8     1         U       4                  Serial     No         2.5...5    0.2                    5 µW in power-down mode
-          12    1         U       60                 Serial     No         2.7...5    0.5                    Multiplying, SPI-compatible interface
-          12    1         I       0.6                Serial     No         5          0.025                  Multiplying
-          12    1         U       10                 Serial     No         5 or 3     0.75 (5 V)/0.36 (3 V)  6-pin package, 0.15 µW in power-down mode, I²C-compatible interface
Precision DACs

High-speed integrated circuits DAC and ADC and measurement of their parameters

The book considers the construction principles, parameters and electrical characteristics of high-speed integrated digital-to-analog and analog-to-digital converters with conversion rates from 10^7 to 10^9 bits per second. It describes methods and principles for building meters of the static and dynamic parameters of converters, and lists specific types of measuring equipment intended for monitoring and measuring those parameters. It is intended for engineers specializing in the development and application of digital-to-analog and analog-to-digital converters, as well as of equipment for measuring and monitoring their electrical parameters.


When developing and manufacturing DAC and ADC chips, one has to deal with a large range of incoming components and with tighter requirements on their electrical parameters, in accuracy and temperature stability, than for digital ICs, as well as with the irregularity of the structure and the presence of nodes performing both linear and nonlinear signal processing (bit switches, amplifiers, comparators, reference voltage sources, resistor matrices, control and storage circuits). Many technological problems arise in meeting the requirements for accuracy and control of the geometric dimensions of the multilayer microstructures formed on a silicon wafer.
