CCD and CMOS sensors for digital photo and video cameras

The sensor is the main element of a digital camera

The heart of any digital video or photo camera (the boundaries between these types of devices are now gradually blurring) is a light-sensitive sensor.

It converts visible light into electrical signals that are then processed by electronic circuits. From the school physics course we know that light can be considered as a stream of elementary particles: photons. Photons hitting the surface of some semiconductor materials can lead to the formation of electrons and holes (recall that in semiconductors a hole is the usual name for a vacant place for an electron, formed as a result of the rupture of covalent bonds between the atoms of the semiconductor substance). The generation of electron-hole pairs under the influence of light is possible only when the photon energy is sufficient to "tear" an electron away from its "native" nucleus and transfer it to the conduction band. The energy of a photon is inversely proportional to the wavelength of the incident light, that is, it depends on the so-called color of the radiation. In the range of visible (that is, perceived by the human eye) radiation, the photon energy is sufficient to generate electron-hole pairs in semiconductor materials such as, for example, silicon.

Since the number of photoelectrons produced is directly proportional to the intensity of the light flux, it becomes possible to mathematically relate the amount of incident light to the amount of charge it generates. It is on this simple physical phenomenon that the operating principle of photosensitive sensors is based. The sensor performs five basic operations: absorbs photons, converts them into charge, stores it, transmits it, and converts it into voltage. Depending on the manufacturing technology, different sensors perform the tasks of storing and accumulating photoelectrons in different ways.
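This linear relation is easy to express in a few lines of code. Below is a minimal Python sketch; the flux, exposure and quantum-efficiency numbers are illustrative assumptions, not values for any real sensor:

    def collected_electrons(photon_flux, exposure_s, quantum_efficiency=0.5):
        """Electrons accumulated by one pixel; proportional to the incident light."""
        photons = photon_flux * exposure_s        # photons hitting the pixel
        return photons * quantum_efficiency      # fraction converted to electrons

    # Twice the light yields twice the charge:
    print(collected_electrons(10_000, 0.02))     # 100.0 electrons
    print(collected_electrons(20_000, 0.02))     # 200.0 electrons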

Historically, the first photosensitive elements used in video cameras were so-called CCD matrices, mass production of which began in 1973. The abbreviation CCD stands for charge-coupled device.

As already noted, under the influence of light, electron-hole pairs are formed in a semiconductor. However, along with generation, the reverse process also occurs: the recombination of holes and electrons. Therefore, steps must be taken to separate the resulting electrons and holes and store them for the required time; after all, it is the number of photoelectrons formed that carries the information about the intensity of the absorbed light. This is what the gate and the insulating dielectric layer are designed for. Suppose a positive potential is applied to the gate. Under the influence of the resulting electric field, which penetrates through the dielectric into the semiconductor, the holes, which are the majority charge carriers, begin to shift away from the dielectric, that is, into the depth of the semiconductor. At the interface between the semiconductor and the dielectric, a region depleted of majority carriers (holes) is formed, and the size of this region depends on the magnitude of the applied potential. It is this depleted region that serves as the "storage" for photoelectrons. Indeed, if the semiconductor is exposed to light, the resulting electrons and holes move in opposite directions: the holes into the depth of the semiconductor, and the electrons toward the depletion layer. Since there are no holes in this layer, the electrons remain there, without recombining, for the required time.

Naturally, the process of electron accumulation cannot continue indefinitely.

Let's imagine not one, but several closely spaced gates on the surface of the same dielectric (Fig. 2). Let electrons accumulate under one of the gates as a result of photogeneration. If a higher positive potential is applied to the adjacent gate, then electrons will begin to flow into the region of a stronger field, that is, move from one gate to another.

It should now be clear that if we have a chain of gates, then by applying appropriate control voltages to them, we can move a localized charge packet along such a structure. It is on this simple principle that charge-coupled devices are based.

A remarkable property of CCDs is that only three types of gates are sufficient to move the accumulated charge: one transmitting, one receiving, and one isolating gate that separates the receiving-transmitting pairs from one another. The same-named gates of these triplets can be connected into a single clock bus requiring only one external pin (Fig. 3). This is the simplest three-phase CCD shift register.
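The clocking scheme is easy to model. The following Python toy model (a six-gate register with invented packet sizes, purely for illustration) shifts every charge packet one gate per clock phase, so a packet passes one three-gate cell per full three-phase cycle:

    def clock_one_phase(gates):
        """Shift every charge packet one gate to the right; last gate feeds the output."""
        output = gates[-1]
        return [0] + gates[:-1], output

    gates = [5, 0, 0, 9, 0, 0]   # two packets, one per three-gate cell
    for step in range(6):
        gates, out = clock_one_phase(gates)
        if out:
            print(f"step {step}: packet of {out} electrons reached the output")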

The structure of the CCD matrix we considered is called a CCD with a surface transmission channel, since the channel through which the accumulated charge is transmitted is located on the surface of the semiconductor. The surface transmission method has a number of significant disadvantages associated with the properties of the semiconductor boundary. The fact is that the limitation of a semiconductor in space violates the ideal symmetry of its crystal lattice with all the ensuing consequences. Without delving into the intricacies of solid state physics, we note that such a limitation leads to the formation of energy traps for electrons. As a result, electrons accumulated under the influence of light can be captured by these traps instead of being transferred from one gate to another. Among other things, such traps can release electrons unpredictably, and not always when they are really needed. It turns out that the semiconductor begins to “make noise” - in other words, the number of electrons accumulated under the gate will not exactly correspond to the intensity of the absorbed radiation.

Such phenomena can be avoided, but to do so the transfer channel itself must be moved deeper into the semiconductor. This solution was implemented by Philips specialists in 1972. The idea was that a thin layer of n-type semiconductor, that is, a semiconductor in which the majority charge carriers are electrons, was created in the surface region of the p-type semiconductor (Fig. 5).

It is well known that the contact of two semiconductors with different types of conductivity leads to the formation of a depletion layer at the junction boundary. This happens due to the diffusion of holes and electrons in opposite directions and their recombination. Applying a positive potential to the gate increases the size of the depletion region. What matters is that the depletion region, which serves as the reservoir for photoelectrons, now lies away from the surface, and therefore there are no surface traps for electrons. Such a transfer channel is called a buried (hidden) channel, and all modern CCDs are manufactured with a buried transfer channel.

In a frame-transfer matrix there are two equivalent sections with the same number of rows: accumulation and storage. Each row in these sections is formed by three gates (transmitting, receiving and isolating). In addition, as noted above, all rows are separated by numerous channel stops that delimit the accumulation cells in the horizontal direction. Thus, the smallest structural element of a CCD matrix (a pixel) is formed by three horizontal gates and two vertical channel stops (Fig. 6).

During exposure, photoelectrons are formed in the accumulation section. After that, clock pulses applied to the gates transfer the accumulated charges from the accumulation section to the shaded storage section; in effect, the entire frame is transferred at once. This is why the architecture is called frame-transfer CCD. After the transfer, the accumulation section is cleared and can accumulate charges again, while the charges from the storage section flow into the horizontal read register. The structure of the horizontal register is similar to that of the vertical registers: the same three gates per charge-transfer element. Each element of the horizontal register is charge-coupled to the corresponding column of the storage section; with each row clock pulse, an entire row moves from the storage section into the read register and is then shifted out to the output amplifier for further processing.
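The readout order can be sketched as follows. This is a toy Python model with a hypothetical 3x4 frame of charge values; real matrices differ in size and in which row leaves the storage section first:

    frame = [[1, 2, 3, 4],
             [5, 6, 7, 8],
             [9, 10, 11, 12]]

    storage = [row[:] for row in frame]      # fast vertical transfer of the whole frame
    frame = [[0] * 4 for _ in range(3)]      # accumulation section starts collecting again

    while storage:
        horizontal_register = storage.pop()  # one row enters the read register
        while horizontal_register:
            print(horizontal_register.pop(), end=" ")   # pixel to the output amplifier
        print()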

The CCD matrix design considered here has one undoubted advantage: a high fill factor. This term usually refers to the ratio of the photosensitive area of the matrix to its total area. In frame-transfer matrices the fill factor reaches almost 100%. This feature makes it possible to create very sensitive devices based on them.

In addition to the advantages considered, frame-transfer matrices also have a number of disadvantages. First of all, the transfer process itself cannot be carried out instantly, and it is this circumstance that leads to a number of negative phenomena. During the transfer of charge from the accumulation section to the storage section, the former remains illuminated and photoelectron accumulation continues in it. As a result, bright areas of the image manage to contribute spurious charge to packets passing through them, even during the short transfer time. This produces characteristic distortions in the frame: vertical stripes extending across the entire frame from the bright areas of the image. Of course, various tricks can be used to combat such phenomena, but the most radical method is to separate the accumulation section from the transfer path so that the transfer occurs in a shaded area. Matrices of this architecture are called interline-transfer CCDs (Fig. 7).

Unlike the frame-transfer matrix described earlier, here photodiodes act as the charge-accumulating elements (photodiodes will be discussed in more detail later). The charges accumulated by the photodiodes are transferred to the shaded CCD elements, which carry out the further charge transfer. Note that the transfer of the entire frame from the photodiodes to the vertical CCD transfer registers occurs in one clock cycle. A natural question arises: why is this architecture called interline transfer (the term "interlaced transfer" is also used)?

If we recall the architecture of the frame-transfer CCD matrix, it becomes clear that there the transfer of a frame from the accumulation section to the storage section occurs during the interframe (vertical blanking) interval of the video signal. This is understandable, since transferring the entire frame requires a significant amount of time. In an interline-transfer architecture, frame transfer takes one clock cycle, and a short period of time is sufficient for this. The image then enters the horizontal shift register, and transmission proceeds line by line during the interline (horizontal blanking) intervals of the video signal.

In addition to the two types of CCD matrices discussed, there are other schemes. For example, a scheme that combines interframe and interline mechanisms (line-frame transfer) is obtained by adding a storage section to the interline transfer CCD matrix.

In this case, the frame is transferred from the photosensitive elements to the vertical registers in one clock cycle during the interline interval (interline transfer), during the interframe interval the frame is moved into the storage section (frame transfer), and from the storage section rows are transferred to the horizontal shift register during the line intervals.

Recently, so-called Super CCDs have become widespread; they use an original cellular architecture formed by octagonal pixels. Thanks to this, the working surface of the silicon is used more fully and the pixel density (the number of CCD pixels) increases. In addition, the octagonal shape of the pixels increases the area of the light-sensitive surface.

CMOS sensors

A fundamentally different type of sensor is the so-called CMOS sensor (CMOS: complementary metal-oxide-semiconductor).

The simplest photodiode is a contact between n- and p-type semiconductors.

At the interface of these semiconductors, a depletion region is formed, that is, a layer without holes and electrons. Such a region forms as a result of the diffusion of the majority charge carriers in opposite directions: holes move from the p-semiconductor (the region where they are in excess) to the n-semiconductor (the region where their concentration is low), and electrons move the opposite way, from the n-semiconductor to the p-semiconductor. As a result of their recombination, the holes and electrons disappear and a depletion region is created. In addition, impurity ions are exposed at the boundaries of the depleted region; in the n-region these ions carry a positive charge, and in the p-region a negative one. These charges, distributed along the boundary of the depletion region, create an electric field similar to that in a parallel-plate capacitor consisting of two plates. It is this field that spatially separates the holes and electrons formed during photogeneration. The presence of such a local field (also called a potential barrier) is a fundamental feature of any photosensitive sensor, not only of a photodiode.

The main difference between CMOS sensors and CCD sensors lies not in the method of accumulating charge but in the method of transferring it further. CMOS technology, unlike CCD, allows a greater number of operations to be performed directly on the chip that carries the photosensitive matrix. In addition to releasing electrons and transferring them, CMOS sensors can also process images, extract image edges, reduce noise, and perform analog-to-digital conversion.

Moreover, programmable CMOS sensors can be created, yielding a very flexible multifunctional device.

Such a wide range of functions performed by a single chip is the main advantage of CMOS technology over CCD. This reduces the number of required external components. Using a CMOS sensor in a digital camera allows you to install other chips in the free space - for example, digital signal processors (DSP) and analog-to-digital converters.

The rapid development of CMOS technologies began in 1993, when active pixel sensors were created. With this technology, each pixel has its own readout transistor amplifier, which allows the charge to be converted into voltage directly in the pixel. In addition, random access to each pixel of the sensor became possible (similar to how random-access memory works). The charge is read from the active pixels of a CMOS sensor via a parallel circuit (Fig. 9), which allows the signal to be read from each pixel or column of pixels directly. Random access allows the CMOS sensor to read out not only the entire matrix, but also selected areas (windowed readout).
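A sketch of such windowed reading, with the sensor modeled as a plain Python list of rows (a toy 4x4 matrix, not any real device's interface):

    def read_window(sensor, top, left, height, width):
        """Read only a rectangular region of interest, skipping the rest."""
        return [row[left:left + width] for row in sensor[top:top + height]]

    sensor = [[r * 4 + c for c in range(4)] for r in range(4)]   # toy 4x4 "matrix"
    print(read_window(sensor, top=1, left=1, height=2, width=2))
    # [[5, 6], [9, 10]]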

Despite the apparent advantages of CMOS matrices over CCDs (the main one being lower price), they also have a number of disadvantages. The additional circuitry on the CMOS chip introduces extra noise sources, such as transistor and diode noise and residual-charge effects; in other words, CMOS matrices today are noisier. Therefore, in the near future professional digital cameras will continue to use high-quality CCD matrices, while CMOS sensors are taking over the market of cheaper devices, which in particular includes Web cameras.

The photosensitive sensors discussed above are capable of responding only to the intensity of absorbed light - the higher the intensity, the greater the charge accumulates.

A natural question arises: how is a color image obtained?

To enable the camera to distinguish colors, an array of color filters (CFA, color filter arrays) is applied directly to the active pixel. The principle of a color filter is very simple: it only allows light of a certain color to pass through (in other words, only light with a certain wavelength). But how many such filters will be needed if the number of different color shades is practically unlimited? It turns out that any color shade can be obtained by mixing several primary (base) colors in certain proportions. In the most popular additive model, RGB (Red, Green, Blue), there are three such colors: red, green and blue. This means that only three color filters are required. Note that the RGB color model is not the only one, but the vast majority of digital Web cameras use it.

The most popular are Bayer-pattern filter arrays. In this pattern, red, green and blue filters are staggered, and there are twice as many green filters as red or blue ones.

The arrangement is such that the red and blue filters are located between the green ones (Fig. 10).
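The layout can be written down directly. Here is a small Python sketch of one common Bayer phase (actual sensors may start the pattern on a different row or column):

    def bayer_channel(row, col):
        """Which filter covers the pixel at (row, col) in a GR/BG Bayer mosaic."""
        if row % 2 == 0:
            return "G" if col % 2 == 0 else "R"
        return "B" if col % 2 == 0 else "G"

    for r in range(4):
        print(" ".join(bayer_channel(r, c) for c in range(6)))
    # G R G R G R
    # B G B G B G
    # G R G R G R
    # B G B G B G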

As already noted, the RGB color model uses three primary colors, with which you can obtain any shade of the visible spectrum. How many shades can digital cameras distinguish? The maximum number of distinct color shades is determined by the color depth, which in turn is determined by the number of bits used to encode the color. The popular RGB 24 model, with a color depth of 24 bits, allocates 8 bits to each color. With 8 bits, 256 different levels can be specified for each of red, green and blue, and each level is assigned a value from 0 to 255. For example, red can take 256 gradations: from pure red (255) to black (0). The maximum code value corresponds to the pure color, and the codes are usually written in the order red, green, blue.

For example, the code for pure red is written as (255, 0, 0), the code for green is (0, 255, 0), and the code for blue is (0, 0, 255). Yellow can be obtained by mixing red and green, and its code is written as (255, 255, 0).
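In code, such a 24-bit value is just three 8-bit channels packed together; a minimal Python illustration:

    def pack_rgb(r, g, b):
        """Pack three 8-bit channels into one 24-bit RGB code (R high, B low)."""
        return (r << 16) | (g << 8) | b

    print(hex(pack_rgb(255, 0, 0)))     # 0xff0000 - pure red
    print(hex(pack_rgb(0, 255, 0)))     # 0xff00   - pure green
    print(hex(pack_rgb(255, 255, 0)))   # 0xffff00 - yellow = red + green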

In addition to the RGB model, the YUV and YCrCb models, which are similar to each other and are based on the separation of brightness and color signals, have also found widespread use.

How digital Web cameras work

The operating principle of all types of digital cameras is approximately the same. Let's consider a typical diagram of the simplest Web camera, the main difference of which from other types of cameras is the presence of a USB interface for connecting to a computer.

In addition to the optical system (lens) and the photosensitive CCD or CMOS sensor, it is necessary to have an analog-to-digital converter (ADC), which converts the analog signals of the photosensitive sensor into a digital code.

In addition, a system for forming a color image is also necessary. Another important element of the camera is the circuit responsible for data compression and preparation for transmission in the required format. For example, in the Web camera considered here, video data is transmitted to the computer via a USB interface, so there must be a USB interface controller at its output. The block diagram of a digital camera is shown in Fig. 11.

An analog-to-digital converter samples the continuous analog signal and is characterized by its sampling frequency, which determines the time intervals at which the analog signal is measured, and by its bit depth. The bit depth of the ADC is the number of bits used to represent each signal sample. For example, if an 8-bit ADC is used, then 8 bits represent each sample, which allows 256 gradations of the original signal to be distinguished; a 10-bit ADC distinguishes 1024 gradations.
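The effect of bit depth is easy to demonstrate. Here is a minimal Python model of an ideal ADC (real converters add noise and nonlinearity, which are ignored here):

    def quantize(signal, bits):
        """Map an analog level in [0, 1) to one of 2**bits integer codes."""
        levels = 2 ** bits
        return min(int(signal * levels), levels - 1)

    print(quantize(0.5, 8))    # 128 -> one of 256 gradations
    print(quantize(0.5, 10))   # 512 -> one of 1024 gradations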

As can be seen from the block diagram, the color-forming block (the analog signal processor) has two channels: RGB and YCrCb. For the YCrCb model the brightness and color-difference signals are calculated using the formulas:

Y = 0.59G + 0.31R + 0.11B,

Cr = 0.713 × (R – Y),

Cb = 0.564 × (B – Y).
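As a check, these formulas can be applied directly. A short Python sketch with RGB normalized to the range 0..1 (note that the document's luminance coefficients sum to 1.01, so pure white gives Y slightly above 1):

    def rgb_to_ycrcb(r, g, b):
        """Brightness and color-difference signals per the formulas above."""
        y = 0.59 * g + 0.31 * r + 0.11 * b
        cr = 0.713 * (r - y)
        cb = 0.564 * (b - y)
        return y, cr, cb

    print(rgb_to_ycrcb(1.0, 1.0, 1.0))  # white: Y ~= 1.01, Cr and Cb near 0
    print(rgb_to_ycrcb(1.0, 0.0, 0.0))  # pure red: large positive Cr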

The analog RGB and YCrCb signals generated by the analog signal processor are processed by two 10-bit ADCs, each operating at 13.5 MSPS, providing pixel-speed synchronization. Once digitized, the data is sent to a digital converter that produces video data in 16-bit YUV 4:2:2 or 8-bit Y 4:0:0 format, which is sent to the output port via a 16-bit or 8-bit bus.

In addition, the CMOS sensor in question has a wide range of image correction capabilities: white balance, exposure control, gamma correction, color correction, etc. are provided. You can control the operation of the sensor via the SCCB (Serial Camera Control Bus) interface.

The OV511+ chip, whose block diagram is shown in Fig. 13, is a USB controller.

The controller allows you to transfer video data via a USB bus at speeds of up to 7.5 Mbit/s. It is easy to calculate that such a bandwidth will not allow transmitting a video stream at an acceptable speed without preliminary compression. Actually, compression is the main purpose of the USB controller. Providing the necessary compression in real time up to a compression ratio of 8:1, the controller allows you to transmit a video stream at a speed of 10-15 frames per second at a resolution of 640x480 and at a speed of 30 frames per second at a resolution of 320x240 and lower.
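The arithmetic behind that claim is straightforward; a quick Python estimate (frame size, bit depth and frame rate taken from the text, all protocol overhead ignored for simplicity):

    # Raw 16-bit YUV 4:2:2 video at 640x480 and 10 frames per second:
    raw_bps = 640 * 480 * 16 * 10       # bits per second before compression
    usb_bps = 7.5e6                     # the controller's USB throughput

    print(raw_bps / 1e6)                # 49.152 Mbit/s - far above 7.5
    print(raw_bps / 8 / 1e6)            # 6.144 Mbit/s - fits after 8:1 compression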

The OmniCE block, which implements a proprietary compression algorithm, is responsible for data compression.

OmniCE provides not only the required video stream speed, but also fast decompression with minimal CPU load (at least according to the developers). The compression ratio provided by the OmniCE block varies from 4 to 8 depending on the required video stream speed.

ComputerPress 12'2001

What is a CCD?

A little history
In the late 60s and early 70s, development began on so-called charge-coupled devices, abbreviated CCDs. The principle behind CCD matrices is the fact that silicon responds to visible light, and this fact led to the idea that this property could be used to obtain images of luminous objects.

Astronomers were among the first to recognize the extraordinary capabilities of CCDs for image recording. In 1972, a group of researchers at JPL (Jet Propulsion Laboratory, USA) launched a program to develop CCDs for astronomy and space research. Three years later, together with scientists at the University of Arizona, the team obtained the first astronomical CCD image. A near-infrared image of Uranus taken with a 1.5-meter telescope revealed dark spots near the planet's south pole, indicating the presence of methane...

Today CCD matrices are used very widely: in digital still and video cameras, and it has even become possible to build a CCD camera into mobile phones.

CCD device

A typical CCD device (Fig. 1): on the semiconductor surface there is a thin (0.1-0.15 μm) layer of dielectric (usually oxide), on which strips of conducting electrodes (made of metal or polycrystalline silicon) are located. These electrodes form a linear or matrix regular system, and the distances between the electrodes are so small that the effects of mutual influence of neighboring electrodes are significant. The operating principle of CCDs is based on the emergence, storage and directional transmission of charge packets in potential wells formed in the near-surface layer of a semiconductor when external electrical voltages are applied to the electrodes.



Fig. 1. Basic design of a CCD matrix.

In Fig. 1, the symbols C1, C2 and C3 indicate MOS capacitors (metal-oxide-semiconductor).

If a positive voltage U is applied to any electrode, then an electric field arises in the MIS structure, under the influence of which the majority carriers (holes) very quickly (in a few picoseconds) move away from the surface of the semiconductor. As a result, a depleted layer is formed at the surface, the thickness of which is fractions or units of a micrometer.
Minority carriers (electrons) generated in the depletion layer by some process (for example, thermal generation) or arriving there from the neutral regions of the semiconductor by diffusion will move (under the influence of the field) to the semiconductor-insulator interface and be localized in a narrow inversion layer. Thus, a potential well for electrons appears at the surface, into which they roll from the depletion layer under the influence of the field. The majority carriers (holes) generated in the depletion layer are ejected into the neutral part of the semiconductor by the field.

Over a given period of time, each pixel is gradually filled with electrons in proportion to the amount of light entering it. At the end of this time, the electrical charges accumulated by each pixel are transferred in turn to the “output” of the device and measured.

The size of the photosensitive pixel of the matrices ranges from one or two to several tens of microns. The size of silver halide crystals in the photosensitive layer of photographic film ranges from 0.1 (positive emulsions) to 1 micron (highly sensitive negative).

One of the main parameters of a matrix is its so-called quantum efficiency. This name reflects the efficiency of converting absorbed photons (quanta) into photoelectrons and is analogous to the photographic concept of photosensitivity. Since the energy of light quanta depends on their color (wavelength), it is impossible to say unambiguously how many electrons will be produced in a matrix pixel when it absorbs, say, a flux of one hundred photons of mixed wavelengths. Therefore, quantum efficiency is usually given in the matrix data sheet as a function of wavelength, and in certain parts of the spectrum it can reach 80%. This is much higher than that of photographic emulsion or the eye (about 1%).
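In other words, quantum efficiency is a wavelength-dependent conversion factor. A tiny Python illustration (the QE values per wavelength are invented for the example, not taken from any datasheet):

    qe_by_wavelength = {450: 0.35, 550: 0.60, 650: 0.80}   # hypothetical QE curve, nm

    def electrons_from(photons, wavelength_nm):
        """Average photoelectrons produced from a monochromatic photon flux."""
        return photons * qe_by_wavelength[wavelength_nm]

    print(electrons_from(100, 650))   # 80.0 electrons from 100 red photons
    print(electrons_from(100, 450))   # 35.0 electrons from 100 blue photons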

What types of CCD matrices are there?

CCD arrays (linear CCDs) were widely used for astronomical observations in the 80s and 90s: it was enough to sweep the image along the CCD line for it to appear on the computer monitor. But this process involved many difficulties, so at present CCD arrays are increasingly being replaced by CCD matrices.

Undesirable effects

One undesirable side effect of charge transfer in a CCD that can interfere with observations is bright vertical stripes (smear columns) in place of small bright areas of the image. Other possible undesirable effects of CCD matrices include high dark noise, the presence of "blind" or "hot" pixels, and uneven sensitivity across the matrix field. To reduce dark noise, autonomous cooling of CCD matrices to temperatures of -20°C and below is used. Alternatively, a dark frame is taken (for example, with the lens capped) with the same duration (exposure) and temperature as the preceding frame; a special program on the computer then subtracts the dark frame from the image.
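The dark-frame correction just described amounts to a pixel-wise subtraction. A minimal Python sketch with images as nested lists (real software also averages several dark frames, handles saturation, and so on):

    def subtract_dark(image, dark):
        """Subtract a dark frame pixel by pixel, clipping negatives to zero."""
        return [[max(p - d, 0) for p, d in zip(img_row, dark_row)]
                for img_row, dark_row in zip(image, dark)]

    image = [[120, 30], [45, 200]]
    dark  = [[20, 5], [15, 10]]          # thermal background and hot pixels
    print(subtract_dark(image, dark))    # [[100, 25], [30, 190]]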

The good thing about CCD-based television cameras is that they can capture images at up to 25 frames per second with a resolution of 752x582 pixels. The unsuitability of some cameras of this type for astronomical observations lies in the fact that the manufacturer implements internal image preprocessing (read: distortion) in them for better visual perception of the resulting frames. This includes AGC (automatic gain control), the so-called "sharp edges" effect and others.

Progress…

In general, using CCD receivers is much more convenient than using non-digital light receivers, since the received data is immediately in a form suitable for computer processing and, in addition, individual frames can be obtained quickly (at rates from several frames per second down to exposures lasting minutes).

Currently, the production of CCD matrices is rapidly developing and improving: the "megapixel" count of the matrices, that is, the number of individual photosensitive elements, keeps growing, and the quality of the images obtained with CCD matrices improves.


General information about CCD matrices.

Currently, most image capture systems use CCD (charge-coupled device) matrices as the photosensitive device.

The operating principle of a CCD matrix is ​​as follows: a matrix of photosensitive elements (accumulation section) is created on the basis of silicon. Each photosensitive element has the property of accumulating charges proportional to the number of photons hitting it. Thus, over some time (exposure time) in the accumulation section, a two-dimensional matrix of charges proportional to the brightness of the original image is obtained. The accumulated charges are initially transferred to the storage section, and then line by line and pixel by pixel to the output of the matrix.

The size of the storage section in relation to the accumulation section varies:

  • per frame (matrices with frame transfer for progressive scan);
  • per half-frame (matrices with frame transfer for interlaced scanning);

There are also matrices in which there is no storage section, and then line transfer is carried out directly through the accumulation section. Obviously, for such matrices to work, an optical shutter is required.

The quality of modern CCD matrices is such that the charge remains virtually unchanged during the transfer process.

Despite the apparent variety of television cameras, the CCD matrices used in them are practically the same, since mass and large-scale production of CCD matrices is carried out by only a few companies. These are SONY, Panasonic, Samsung, Philips, Hitachi, Kodak.

The main parameters of CCD matrices are:

  • dimension in pixels;
  • physical size in inches (2/3, 1/2, 1/3, etc.). Moreover, the numbers themselves do not determine the exact size of the sensitive area, but rather determine the class of the device;
  • sensitivity.

Resolution of CCD cameras.

The resolution of CCD cameras is mainly determined by the size of the CCD matrix in pixels and the quality of the lens. To some extent, this can be influenced by the camera’s electronics (if it’s poorly made, it can worsen the resolution, but they rarely do anything frankly bad these days).

It is important to make one note here. In some cases, high-frequency spatial filters are installed in cameras to improve apparent sharpness. As a result, an image of an object obtained with a worse camera may appear even sharper than an image of the same object obtained with an objectively better one. Of course, this is acceptable when the camera is used in visual surveillance systems, but it is completely unsuitable for building measurement systems.

Resolution and format of CCD matrices.

Currently, various companies produce CCD matrices covering a wide range of dimensions, from several hundred to several thousand pixels on a side. A matrix with a dimension of 10000x10000 has been reported, and the report noted not so much the cost of this matrix as the problems of storing, processing and transmitting the resulting images. Matrices with dimensions up to 2000x2000 are now in more or less wide use.

The most widely, or more precisely, mass-used CCD matrices certainly include matrices with a resolution oriented to the television standard. These are matrices mainly of two formats:

  • 512*576;
  • 768*576.
512*576 matrices are usually used in simple and cheap video surveillance systems.

Matrices 768*576 (sometimes a little more, sometimes a little less) allow you to get the maximum resolution for a standard television signal. At the same time, unlike matrices of the 512*576 format, they have a grid arrangement of photosensitive elements close to a square, and, therefore, equal horizontal and vertical resolution.

Often, camera manufacturers indicate resolution in television lines. This means that the camera allows you to see N/2 dark vertical strokes on a light background, arranged in a square inscribed in the image field, where N is the declared number of television lines. In relation to a standard television test table, this assumes the following: by selecting the distance and focusing the table image, it is necessary to ensure that the upper and lower edges of the table image on the monitor coincide with the outer contour of the table, marked by the vertices of black and white prisms; then, after final subfocusing, a reading is taken at the place on the vertical wedge where the vertical strokes first cease to be resolved. The last remark is very important, because in the image of test fields of a table with 600 or more lines, alternating stripes are often visible which are, in fact, moiré formed by beating between the spatial frequencies of the table's strokes and the grid of sensitive elements of the CCD matrix. This effect is especially pronounced in cameras with high-frequency spatial filters (see above)!

I would like to note that, all other things being equal (this can mainly be influenced by the lens), the resolution of black-and-white cameras is uniquely determined by the size of the CCD matrix. So a 768*576 format camera will have a resolution of 576 television lines, although in some prospectuses you can find a value of 550, and in others 600.
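The convention is easy to restate numerically. A small Python sketch of the relation between pixels, television lines and resolvable strokes (idealized: perfect lens, no moiré):

    def tv_lines(vertical_pixels):
        """Upper bound on TV lines for an ideal camera: one line per pixel row."""
        return vertical_pixels

    def resolvable_strokes(n_tv_lines):
        """N television lines means N/2 dark strokes on a light background."""
        return n_tv_lines // 2

    print(tv_lines(576), resolvable_strokes(576))   # 576 288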

Lens.

The physical size of the CCD cells is the main parameter that determines the requirement for the resolution of the lens. Another such parameter may be the requirement to ensure the operation of the matrix under light overload conditions, which will be discussed below.

For a 1/2 inch SONY ICX039 matrix, the pixel size is 8.6µm*8.3µm. Therefore, the lens must have a resolution better than:

1 / (8.3 × 10^-3 mm) ≈ 120 lines/mm (60 line pairs per millimeter).
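In Python, the same estimate for any pixel pitch (a square pitch is assumed here; using the ICX039's 8.3 µm pitch reproduces the figure above):

    def required_lens_resolution(pixel_pitch_um):
        """Lines/mm and line pairs/mm a lens must resolve for a given pixel pitch."""
        lines_per_mm = 1000.0 / pixel_pitch_um    # one line per pixel pitch
        return lines_per_mm, lines_per_mm / 2

    lines, pairs = required_lens_resolution(8.3)
    print(round(lines), round(pairs))    # 120 60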

For lenses made for 1/3-inch matrices, this value should be even higher, although this, oddly enough, does not affect the cost and such a parameter as aperture, since these lenses are made taking into account the need to form an image on a smaller light-sensitive field of the matrix. It also follows that lenses for smaller matrices are not suitable for large matrices due to significantly deteriorating characteristics at the edges of large matrices. At the same time, lenses for large sensors can limit the resolution of images obtained from smaller sensors.

Unfortunately, with all the modern abundance of lenses for television cameras, it is very difficult to obtain information on their resolution.

In general, we do not often select lenses, since almost all of our Customers install video systems on existing optics: microscopes, telescopes, etc., so our information about the lens market is in the nature of notes. We can only say that the resolution of simple and cheap lenses is in the range of 50-60 pairs of lines per mm, which is generally not enough.

On the other hand, we have information that special lenses produced by Zeiss with a resolution of 100-120 line pairs per mm cost more than $1000.

So, when purchasing a lens, it is necessary to conduct preliminary testing. I must say that most Moscow sellers provide lenses for testing. Here it is once again appropriate to recall the moire effect, the presence of which, as noted above, can mislead regarding the resolution of the matrix. So, the presence of moire in the image of sections of the table with strokes above 600 television lines in relation to the lens indicates a certain reserve of the latter’s resolution, which, of course, does not hurt.

One more note, perhaps important for those interested in geometric measurements. All lenses exhibit distortion to one degree or another (pincushion-shaped distortion of the image geometry), and as a rule, the shorter the focal length, the greater these distortions. In our opinion, lenses with focal lengths greater than 8-12 mm have acceptable distortion for 1/3" and 1/2" cameras. Although the level of "acceptability", of course, depends on the tasks the television camera must solve.

Resolution of image input controllers

The resolution of image input controllers should be understood as the conversion frequency of the analog-to-digital converter (ADC) of the controller, the data of which is then recorded in the controller’s memory. Obviously, there is a reasonable limit to increasing the digitization frequency. For devices with a continuous structure of the photosensitive layer, for example, vidicons, the optimal digitization frequency is equal to twice the upper frequency of the useful signal of the vidicon.

Unlike such light detectors, CCD matrices have a discrete topology, so the optimal digitization frequency for them is determined as the shift frequency of the output register of the matrix. In this case, it is important that the controller’s ADC operates synchronously with the output register of the CCD matrix. Only in this case can the best conversion quality be achieved both from the point of view of ensuring a “rigid” geometry of the resulting images and from the point of view of minimizing noise from clock pulses and transient processes.

Sensitivity of CCD cameras

Since 1994, we have been using SONY card cameras in our devices based on the ICX039 CCD matrix. The SONY description for this device indicates a sensitivity of 0.25 lux on an object with a lens aperture of 1.4. Several times already, we have come across cameras with similar parameters (size 1/2 inch, resolution 752*576) and with a declared sensitivity of 10 or even 100 times greater than that of “our” SONY.

We checked these numbers several times. In most cases, in cameras from different companies, we found the same ICX039 CCD matrix. Moreover, all the supporting chips around it were also SONY-made. And comparative testing showed almost complete identity of all these cameras. So what's the question?

And the whole question is at what signal-to-noise ratio (s/n) the sensitivity is determined. In our case, the SONY company conscientiously showed sensitivity at s/n = 46 dB, while other companies either did not indicate this or indicated it in such a way that it is unclear under what conditions these measurements were made.

This is, in general, a common scourge of most camera manufacturers - not specifying the conditions for measuring camera parameters.

The fact is that as the requirement for the S/N ratio decreases, the sensitivity of the camera increases in inverse proportion to the square of the required S/N ratio:

I = K / (s/n)²

where:
I is the sensitivity;
K is a conversion factor;
s/n is the signal-to-noise ratio in linear units.

Therefore, many companies are tempted to indicate camera sensitivity at a low S/N ratio.
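To see how strongly the quoted S/N affects the advertised figure, here is a small Python sketch of the inverse-square relation above (assuming S/N quoted in dB and converted as 20·log10; the 0.25 lux at 46 dB reference is the SONY figure from the text):

    def min_illumination(snr_db, reference_lux=0.25, reference_snr_db=46.0):
        """Minimum scene illumination (lux) claimable at a given required S/N."""
        ratio = 10 ** ((snr_db - reference_snr_db) / 20)
        return reference_lux * ratio ** 2

    print(min_illumination(46))   # 0.25 lux - the honestly specified figure
    print(min_illumination(26))   # 0.0025 lux - same camera, looks 100x "better"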

We can say that the ability of matrices to “see” better or worse is determined by the number of charges converted from photons incident on its surface and the quality of delivery of these charges to the output. The amount of accumulated charges depends on the area of ​​the photosensitive element and the quantum efficiency of the CCD matrix, and the quality of transportation is determined by many factors, which often come down to one thing - readout noise. The readout noise for modern matrices is on the order of 10-30 electrons or even less!

The areas of the elements of CCD matrices vary, but a typical value for 1/2-inch matrices for television cameras is 8.5 µm × 8.5 µm. Increasing the size of the elements increases the size of the matrices themselves, which raises their cost not so much because of the actual increase in production price as because such devices are produced in series several orders of magnitude smaller. In addition, the area of the photosensitive zone is affected by the matrix topology, namely by what percentage of the total crystal surface is occupied by the sensitive area (the fill factor). In some special matrices, the fill factor is stated to be 100%.

Quantum efficiency (how much on average the charge of a sensitive cell in electrons changes when one photon falls on its surface) for modern matrices is 0.4-0.6 (for some matrices without anti-blooming it reaches 0.85).

Thus, it can be seen that the sensitivity of CCD cameras, referred to a specific S/N value, has come close to the physical limit. By our estimate, typical sensitivity values of general-purpose cameras at s/n = 46 dB lie in the range of 0.15-0.25 lux of illumination on the object with a lens aperture of 1.4.

In this regard, we do not recommend blindly trusting the sensitivity figures given in television camera descriptions, especially when the conditions under which this parameter was determined are not stated; and if you see, in the data sheet of a camera costing up to $500, a sensitivity of 0.01-0.001 lux in television mode, then you have before you an example of, to put it mildly, incorrect information.

About ways to increase the sensitivity of CCD cameras

What do you do if you need to image a very faint object, such as a distant galaxy?

One way to solve this is to accumulate images over time. The implementation of this method can significantly increase the sensitivity of the CCD. Of course, this method can be applied to stationary objects of observation or in cases where movement can be compensated, as is done in astronomy.

Fig. 1. Planetary nebula M57.
Telescope: 60 cm; exposure: 20 seconds; temperature during exposure: 20°C.
At the center of the nebula is a stellar object of magnitude 15.
The image was obtained by V. Amirkhanyan at the Special Astrophysical Observatory of the Russian Academy of Sciences.

It can be stated with reasonable accuracy that the sensitivity of CCD cameras is directly proportional to the exposure time.

For example, at a shutter speed of 1 second, sensitivity relative to the original 1/50 s will increase 50-fold, i.e. it will improve to 0.25/50 = 0.005 lux.

Of course, there are problems along this path, and the first of them is the dark current of the matrices, which contributes charge that accumulates along with the useful signal. Dark current is determined firstly by the crystal manufacturing technology, secondly by the level of the production process and, of course, to a very large extent by the operating temperature of the matrix itself.

Usually, to achieve long accumulation times, on the order of minutes or tens of minutes, matrices are cooled to minus 20-40 degrees C. The problem of cooling matrices to such temperatures has been solved, but it cannot be called trivial: there are always design and operational problems associated with fogging of the protective glasses and with removing heat from the hot junction of the thermoelectric cooler.

At the same time, technological progress in the production of CCD matrices has also affected such a parameter as dark current. Here the achievements are very significant and the dark current of some good modern matrices is very small. In our experience, cameras without cooling allow making exposures at room temperature within tens of seconds, and with dark background compensation up to several minutes. As an example, here is a photograph of the planetary nebula M57, obtained with the VS-a-tandem-56/2 video system without cooling with an exposure of 20 s.

The second way to increase sensitivity is to use image intensifiers (electron-optical converters). Image intensifiers are devices that amplify the luminous flux. Modern image intensifiers can have very large gains; however, without going into details, we note that an image intensifier can only improve the threshold sensitivity of a camera, and therefore its gain should not be made too large.

Spectral sensitivity of CCD cameras


Fig. 2. Spectral characteristics of various matrices.

For some applications, the spectral sensitivity of the CCD is an important factor. Since all CCDs are made on the basis of silicon, in their “bare” form the spectral sensitivity of the CCD corresponds to this parameter of silicon (see Fig. 2).

As you can see, with all the variety of characteristics, CCD matrices have maximum sensitivity in the red and near-infrared (IR) range and see absolutely nothing in the blue-violet part of the spectrum. The near-IR sensitivity of CCDs is used in covert surveillance systems illuminated by IR light sources, as well as when measuring thermal fields of high-temperature objects.


Fig. 3. Typical spectral characteristics of SONY black-and-white matrices.

SONY produces all its black-and-white matrices with spectral characteristics of the kind shown in Fig. 3. As the figure shows, the sensitivity of these CCDs in the near IR is significantly reduced, but the matrices now also perceive the blue region of the spectrum.

For various special purposes, matrices sensitive in the ultraviolet and even X-ray range are being developed. Usually these devices are unique and their price is quite high.

About progressive and interlaced scanning

The standard television signal was developed for broadcast television and, from the point of view of modern image input and processing systems, has one big drawback. Although the TV signal contains 625 lines (of which about 576 carry video information), two half-frames are displayed sequentially, consisting of the even lines (even field) and the odd lines (odd field). This means that if a moving image is captured, analysis cannot use a vertical resolution greater than the number of lines in one half-frame (288). In addition, in modern systems, where the image is displayed on a computer monitor (which uses progressive scan), input from an interlaced camera produces an unpleasant visual doubling effect when the object moves.

All methods to combat this shortcoming lead to a deterioration in vertical resolution. The only way to overcome this disadvantage and achieve resolution that matches the resolution of the CCD is to switch to progressive scanning in the CCD. CCD manufacturers produce such matrices, but due to the low production volume, the price of such matrices and cameras is much higher than that of conventional ones. For example, the price of a SONY matrix with progressive scan ICX074 is 3 times higher than ICX039 (interlace scan).

Other camera options

These others include such a parameter as "blooming", i.e. the spreading of charge over the surface of the matrix when individual elements are overexposed. In practice this can occur, for example, when observing objects with glare. It is a rather unpleasant CCD effect, since a few bright points can distort the entire image. Fortunately, many modern matrices contain anti-blooming devices. Thus, in the descriptions of some of the latest SONY matrices we found the figure 2000, characterizing the permissible light overload of individual cells that does not yet lead to charge spreading. This is a fairly high value, especially since, as our experience has shown, it can be achieved only with special adjustment of the drivers that directly control the matrix and of the video signal preamplification channel. In addition, the lens also contributes to the "spreading" of bright points, since at such large light overloads even small scattering outside the main spot noticeably illuminates neighboring elements.

It is also necessary to note here that according to some data, which we have not verified ourselves, matrices with anti-blooming have a 2-fold lower quantum efficiency than matrices without anti-blooming. In this regard, in systems that require very high sensitivity, it may make sense to use matrices without anti-blooming (usually these are special tasks such as astronomical ones).

About color cameras

The materials in this section somewhat exceed the scope of the measuring systems we have set for ourselves; however, the widespread use of color cameras (even wider than black-and-white ones) forces us to clarify this issue, especially since Customers often try to use color television cameras with our black-and-white frame grabbers, and are then very surprised to find stains in the resulting images and to discover that the resolution is insufficient. Let's explain what's going on here.

There are 2 ways to generate a color signal:

  1. using a single-matrix camera;
  2. using a system of three CCD matrices with a color-separating head to obtain the R, G and B components of the color signal on these matrices.

The second way provides the best quality and is the only one suitable for measurement systems; however, cameras operating on this principle are quite expensive (more than $3000).

In most cases, single-chip CCD cameras are used. Let's look at their operating principle.

As is clear from the fairly wide spectral characteristics of the CCD matrix, it cannot determine the “color” of a photon hitting the surface. Therefore, in order to enter a color image, a light filter is installed in front of each element of the CCD matrix. In this case, the total number of matrix elements remains the same. SONY, for example, produces exactly the same CCD matrices for black-and-white and color versions, which differ only in the presence of a grid of light filters in the color matrix, applied directly to the sensitive areas. There are several matrix coloring schemes. Here is one of them.

Here 4 different filters are used (see Fig. 4 and Fig. 5).


Figure 4. Distribution of filters on CCD matrix elements



Figure 5. Spectral sensitivity of CCD elements with various filters.

In each line, the luminance signal is obtained as:

Y = (Cy + G) + (Ye + Mg)

In line A1 the "red" color difference signal is obtained as:

R-Y=(Mg+Ye)-(G+Cy)

and in line A2 a “blue” color difference signal is obtained:

-(B-Y)=(G+Ye)-(Mg+Cy)
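Putting the three formulas together, here is a Python sketch of the decoding arithmetic (the Cy, Ye, Mg and G readings are hypothetical numbers, not calibrated sensor values):

    def cygm_decode(cy, ye, mg, g):
        """Luminance and the two color-difference signals from four filter readings."""
        y = (cy + g) + (ye + mg)                 # luminance
        r_minus_y = (mg + ye) - (g + cy)         # "red" difference, line A1
        b_minus_y = -((g + ye) - (mg + cy))      # "blue" difference, line A2
        return y, r_minus_y, b_minus_y

    print(cygm_decode(cy=0.4, ye=0.5, mg=0.3, g=0.45))
    # (1.65, -0.05, -0.25)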

It is clear from this that the spatial resolution of a color CCD matrix, compared to the same black-and-white one, is usually 1.3-1.5 times worse both horizontally and vertically. Because of the filters, the sensitivity of a color CCD is also worse than that of a black-and-white one. Thus, we can say that if you have a single-matrix receiver of 1000x800 elements, then you can really get about 700x550 for the luminance signal and 500x400 (700x400 is possible) for the color signal.

Leaving aside technical issues, I would like to note that for advertising purposes, many manufacturers of electronic cameras report completely incomprehensible data on their equipment. For example, the Kodak company announces the resolution of its DC120 electronic camera as 1200*1000 with a matrix of 850x984 pixels. But gentlemen, information does not appear out of nowhere, although visually it looks good!

The spatial resolution of a color signal (a signal that carries information about the color of the image) can be said to be at least 2 times worse than the resolution of a black-and-white signal. In addition, the “calculated” color of the output pixel is not the color of the corresponding element of the source image, but only the result of processing the brightness of various elements of the source image. Roughly speaking, due to the sharp difference in brightness of neighboring elements of an object, a color that is not there at all can be calculated, while a slight camera shift will lead to a sharp change in the output color. For example: the border of a dark and light gray field will look like it consists of multi-colored squares.

All these considerations relate only to the physical principle of obtaining information on color CCD matrices, while it must be taken into account that usually the video signal at the output of color cameras is presented in one of the standard formats PAL, NTSC, or less often S-video.

The PAL and NTSC formats are good because they can be immediately reproduced on standard monitors with a video input, but we must remember that these standards provide a significantly narrower band for the color signal, so it is more correct to talk here about a colorized, rather than a color image. Another unpleasant feature of cameras with video signals that carry a color component is the appearance of the above-mentioned streaks in the image obtained by black-and-white frame grabbers. And the point here is that the chrominance signal is located almost in the middle of the video signal band, creating interference when entering an image frame. We do not see this interference on a television monitor because the phase of this “interference” is reversed after four frames and averaged by the eye. Hence the bewilderment of the Customer, who receives an image with interference that he does not see.

It follows from this that if you need to carry out some measurements or decipher objects by color, then this issue must be approached taking into account both the above and other features of your task.

About CMOS matrices

In the world of electronics, everything is changing very quickly, and although the field of photodetectors is one of the most conservative, new technologies have been approaching here recently. First of all, this relates to the emergence of CMOS television matrices.

Indeed, silicon is a light-sensitive element and any semiconductor product can be used as a sensor. The use of CMOS technology provides several obvious advantages over traditional technology.

Firstly, CMOS technology is well mastered and allows the production of elements with a high yield of useful products.

Secondly, CMOS technology allows you to place on the matrix, in addition to the photosensitive area, various peripheral circuits (up to and including the ADC) that previously had to be installed "outside". This makes it possible to produce cameras with digital output "on a single chip".

Thanks to these advantages, it becomes possible to produce significantly cheaper television cameras. In addition, the range of companies producing matrices is expanding significantly.

At the moment, the production of television matrices and cameras based on CMOS technology is just getting started, and information about the parameters of such devices is very scarce. We can only note that the parameters of these matrices do not yet exceed what has already been achieved; as for price, their advantages are undeniable.

Let me give as an example a single-chip color camera from Photobit PB-159. The camera is made on a single chip and has the following technical parameters:

  • resolution - 512*384;
  • pixel size - 7.9µm*7.9µm;
  • sensitivity - 1 lux;
  • output - digital 8-bit SRGB;
  • package - 44-pin PLCC.

Thus, this camera loses about fourfold in sensitivity; in addition, information on another camera suggests that this technology has problems with relatively large dark current.

About digital cameras

Recently, a new market segment using CCD and CMOS matrices has emerged and is growing rapidly: digital cameras. Moreover, at the moment the quality of these products is rising sharply while prices fall just as sharply. Indeed, just 2 years ago a matrix with a resolution of 1024x1024 alone cost about $3000-7000, but now cameras with such matrices and a host of extras (LCD screen, memory, zoom lens, convenient body, etc.) can be bought for less than $1000. This can only be explained by the transition to large-scale production of matrices.

Since these cameras are based on CCD and CMOS matrices, all the discussions in this article about sensitivity and the principles of color signal formation are valid for them.

Instead of a conclusion

The practical experience we have accumulated allows us to draw the following conclusions:

  • The production technology of CCD matrices in terms of sensitivity and noise is very close to physical limits;
  • on the television camera market you can find cameras of acceptable quality, although adjustments may be required to achieve higher parameters;
  • Do not be fooled by the high sensitivity figures given in camera brochures;
  • And yet, prices for cameras that are absolutely identical in quality and even for simply identical cameras from different sellers can differ by more than twice!

The matrix is the main structural element of a camera and one of the key parameters a user takes into account when choosing one. The matrices of modern digital cameras can be classified in several ways, but the main and most common division is by charge readout method: CCD matrices and CMOS matrices. In this article we will look at the operating principles, as well as the advantages and disadvantages, of these two types of matrices, since they are the ones widely used in modern photographic and video equipment.

CCD matrix

A CCD matrix (Charge-Coupled Device) is a rectangular plate of photosensitive elements (photodiodes) located on a semiconductor silicon crystal. Its operating principle is based on the line-by-line movement of the charges that light has generated in the silicon. When a photon collides with a photodiode it is absorbed and an electron is released (the internal photoelectric effect). The resulting charge must be stored somehow for further processing. For this purpose, a semiconductor region is built into the silicon substrate of the matrix, above which lies a transparent electrode made of polycrystalline silicon. When an electric potential is applied to this electrode, a so-called potential well is formed in the depletion zone under the semiconductor, and the charge received from the photons is stored in it. When the charge is read out of the matrix, the charges stored in the potential wells are moved along the transfer electrodes to the edge of the matrix (into the serial shift register) and on toward the amplifier, which amplifies the signal and passes it to the analog-to-digital converter (ADC); from there the converted signal goes to the processor, which processes it and saves the resulting image to the memory card.

Polysilicon photodiodes are used to produce CCD matrices. Such matrices are small in size and allow you to obtain fairly high-quality photographs when shooting in normal lighting.

Advantages of CCDs:

  1. The design of the matrix provides a high density of placement of photocells (pixels) on the substrate;
  2. High quantum efficiency (the ratio of registered photons to the total number incident is about 95%);
  3. High sensitivity;
  4. Good color rendering (with sufficient lighting).

Disadvantages of CCDs:

  1. High noise level at high ISO (at low ISO, noise level is moderate);
  2. Low operating speed compared to CMOS matrices;
  3. High power consumption;
  4. More complex readout technology, requiring many additional control chips;
  5. More expensive to produce than CMOS matrices.

CMOS matrix

A CMOS matrix (Complementary Metal-Oxide-Semiconductor) uses active pixel sensors. Unlike CCDs, CMOS sensors contain a separate transistor in each light-sensitive element (pixel), so charge conversion is performed directly in the pixel. The resulting charge can be read from each pixel individually, eliminating the need for charge transfer (as occurs in CCDs). The pixels of a CMOS sensor are integrated directly with the analog-to-digital converter or even the processor. This rational technology saves energy by shortening the chain of operations compared to CCD matrices, and it reduces the cost of the device thanks to a simpler design.


Brief operating principle of a CMOS sensor:

  1. Before shooting, a reset signal is applied to the reset transistor;
  2. During exposure, light passes through the lens and filter to the photodiode and, as a result of the photoelectric effect, a charge accumulates in the potential well;
  3. The resulting voltage value is read out;
  4. The data are processed and the image is saved.
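The reset/expose/read cycle and the direct pixel access described above can be sketched in Python as follows; the class name, well capacity, quantum efficiency, and linear response are all illustrative assumptions, not a real sensor API.

# Minimal sketch of the per-pixel CMOS cycle (reset -> expose -> read).
# All names and numbers are illustrative assumptions.

class CmosPixel:
    FULL_WELL = 30_000   # assumed well capacity, electrons
    QE = 0.5             # assumed quantum efficiency

    def __init__(self):
        self.charge = 0

    def reset(self):
        # Step 1: the reset transistor clears the well.
        self.charge = 0

    def expose(self, photons):
        # Step 2: the photodiode converts photons to charge, up to full well.
        self.charge = min(self.charge + int(photons * self.QE), self.FULL_WELL)

    def read(self):
        # Step 3: the in-pixel amplifier lets the voltage be read directly;
        # here we return a normalized output level.
        return self.charge / self.FULL_WELL

# Unlike a CCD, any pixel (or subwindow) can be addressed directly,
# without shifting the whole frame out.
sensor = [[CmosPixel() for _ in range(4)] for _ in range(4)]
for row in sensor:
    for px in row:
        px.reset()
        px.expose(photons=10_000)
print(sensor[2][3].read())   # random access to a single pixel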

Advantages of CMOS sensors:

  1. Low power consumption (especially in standby modes);
  2. High speed of operation;
  3. Lower production costs, owing to the similarity of the technology to standard microcircuit production;
  4. Unity of technology with other digital elements, which makes it possible to combine the analog, digital, and processing parts on one chip (i.e., in addition to capturing light, a pixel can convert, process, and denoise the signal);
  5. Possibility of random access to each pixel or group of pixels, which makes it possible to reduce the size of the captured image and increase the readout speed.

Disadvantages of CMOS matrices:

  1. The photodiode occupies only a small part of the pixel area, resulting in low light sensitivity; in modern CMOS matrices, however, this disadvantage has been largely eliminated;
  2. Thermal noise from transistors heating up inside the pixel during readout;
  3. Relatively large dimensions, so photographic equipment with this type of matrix tends to be large and heavy.

In addition to the above types, there are also three-layer matrices, each layer of which is a CCD. The difference is that the cells perceive three colors simultaneously, formed when dichroic prisms split the incident beam of light; each resulting beam is directed to a separate matrix. As a result, the brightness of the blue, red, and green components is determined immediately at the photocell. Three-layer matrices are used in high-end video cameras, which carry the special designation 3CCD.
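The practical difference from a single Bayer-filtered matrix can be sketched as follows; the data and function names are made up for illustration.

# Contrast between a 3CCD system (a full RGB triple at every photosite)
# and a single Bayer-filtered matrix (one color per photosite, with the
# other two channels interpolated later). Purely schematic; values invented.

def three_ccd_sample(r_plane, g_plane, b_plane, x, y):
    # The dichroic prism has already split the beam: each matrix sees one
    # color, so full RGB exists at every pixel with no interpolation.
    return (r_plane[y][x], g_plane[y][x], b_plane[y][x])

def bayer_sample(mosaic, x, y):
    # One matrix, one color value per pixel; the missing two channels
    # must be reconstructed from neighbors (demosaicing).
    pattern = [["R", "G"], ["G", "B"]]
    return pattern[y % 2][x % 2], mosaic[y][x]

R = [[10, 11], [12, 13]]
G = [[20, 21], [22, 23]]
B = [[30, 31], [32, 33]]
print(three_ccd_sample(R, G, B, 1, 0))           # -> (11, 21, 31)
print(bayer_sample([[10, 20], [21, 30]], 1, 0))  # -> ('G', 20)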

To summarize, I would like to note that as CCD and CMOS production technologies develop, their characteristics change too, so it is increasingly difficult to say which type of matrix is definitively better; at the same time, CMOS matrices have recently become increasingly popular in the production of SLR cameras. Knowing the characteristic features of the various types of matrices, one can clearly see why professional photographic equipment that delivers high-quality shooting is rather bulky and heavy. This is worth remembering when choosing a camera: pay attention to the physical dimensions of the matrix, not just the number of pixels.

A CCD matrix (from the English CCD, "Charge-Coupled Device"; in Russian sources, PZS, short for "pribor s zaryadovoy svyazyu", i.e., charge-coupled device) is a specialized analog integrated circuit consisting of photosensitive photodiodes, made on the basis of silicon using charge-coupling technology.

CCD matrices are produced and actively used by Nikon, Canon, Sony, Fuji, Kodak, Matsushita, Philips, and many other companies. In Russia, CCD matrices are today developed and produced by NPP ELAR CJSC in St. Petersburg.

History of the CCD

The charge-coupled device was invented in 1969 by Willard Boyle and George Smith at Bell Laboratories (AT&T Bell Labs). The laboratories were working on video telephony (the "picture phone") and on semiconductor bubble memory. Charge-coupled devices began life as memory devices in which a charge could only be placed into the device's input register. However, the ability of the device's memory element to acquire a charge through the photoelectric effect made imaging the main application of CCDs.

In 1970, Bell Labs researchers learned to capture images with simple linear devices.

Subsequently, under the leadership of Kazuo Iwama, Sony took up CCDs actively, investing heavily, and managed to establish mass production of CCDs for its video cameras.

Iwama died in August 1982; a CCD chip was placed on his tombstone to commemorate his contribution.

In January 2006, W. Boyle and G. Smith were honored by the US National Academy of Engineering for their work on the CCD.

In 2009, the creators of the CCD were awarded the Nobel Prize in Physics.

General structure and principle of operation

A CCD matrix consists of polysilicon electrodes separated from the silicon substrate; when voltage is applied to the polysilicon gates, the electric potentials near the electrodes change.

Before exposure, usually by applying a certain combination of voltages to the electrodes, all previously formed charges are reset and all elements are brought into an identical state.

Next, another combination of voltages on the electrodes creates a potential well in which electrons generated in a given pixel by the incident light can accumulate during the exposure. The more intense the luminous flux during the exposure, the more electrons accumulate in the well and, accordingly, the higher the final charge of that pixel.

After exposure, successive changes of the electrode voltages form, in each pixel and its neighborhood, a potential distribution that causes the charge to flow in a given direction, toward the output elements of the matrix.
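The transfer itself works like a bucket brigade: each clock cycle of the electrode voltages moves every charge packet one cell toward the output. A toy Python model of one row follows, with invented charge values (real clocking is an analog, multi-phase process):

# Schematic "bucket brigade" transfer along one row of electrodes.
# Invented values; real clocking is analog and multi-phase.

def shift_step(packets):
    """One clock cycle: every packet moves one cell toward the output
    (index 0); the packet at the output node is returned as the sample."""
    out = packets[0]
    return packets[1:] + [0], out

line = [500, 1200, 80, 3000]   # charge packets under a row of electrodes
samples = []
for _ in range(len(line)):
    line, q = shift_step(line)
    samples.append(q)
print(samples)                 # packets arrive in order: [500, 1200, 80, 3000]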

Example of a CCD subpixel with an n-type pocket

Manufacturers have different pixel architectures.

Diagram of subpixels of a CCD matrix with an n-type pocket (using the example of a red photodetector)

Legend for the CCD subpixel diagram:

    1 - Photons of light passing through the camera lens;

    2 - Subpixel microlens;

    3 - red subpixel filter (R), a fragment of the Bayer filter;

    4 - Transparent electrode made of polycrystalline silicon or tin oxide;

    5 - Insulator (silicon oxide);

    6 - N-type silicon channel. Carrier generation zone (internal photoelectric effect zone);

    7 - Potential well zone (n-type pocket), where electrons from the carrier generation zone are collected;

    8 - p-type silicon substrate;

Back-illuminated matrices

In the classical CCD design, which uses polycrystalline silicon electrodes, light sensitivity is limited by partial scattering of light at the electrode surface. Therefore, for shooting under special conditions that demand increased photosensitivity in the blue and ultraviolet regions of the spectrum, back-illuminated matrices are used. In sensors of this type the recorded light falls on the substrate, which, to obtain the required internal photoelectric effect, is ground down to a thickness of 10-15 µm. This processing stage significantly increases the cost of the matrix; the devices are very fragile and require extra care during assembly and operation. And when filters that attenuate the light flux are used, all the expensive operations to raise sensitivity become pointless. Back-illuminated matrices are therefore used mainly in astronomical photography.

Photosensitivity

The sensitivity of a matrix is made up of the photosensitivity of all of its photosensors (pixels) and in general depends on:

    the integral photosensitivity, the ratio of the magnitude of the photoelectric effect to the luminous flux (in lumens) from a radiation source of normalized spectral composition;

    the monochromatic photosensitivity, the ratio of the magnitude of the photoelectric effect to the energy of the light radiation (in millielectronvolts) at a given wavelength;

    the spectral photosensitivity, the dependence of photosensitivity on the wavelength of light, given by the set of monochromatic photosensitivity values over the chosen part of the spectrum;