Algebraic complement

The minor M_ij of the element a_ij of an n-th order determinant is the determinant of order (n − 1) obtained from the given determinant by deleting the row and column in which this element is located (the i-th row and the j-th column).

The algebraic complement of the element a_ij is given by the expression A_ij = (−1)^(i+j) · M_ij.

Determinants of order n > 3 are calculated using the theorem on the expansion of a determinant along the elements of a row or column:

Theorem. A determinant is equal to the sum of the products of the elements of any row (or any column) and the algebraic complements corresponding to these elements, i.e. Δ = a_i1·A_i1 + a_i2·A_i2 + … + a_in·A_in.

Example.

Calculate the determinant by expanding it along the elements of a row or column:

Solution

1. If some row or column contains only one nonzero element, there is no need to transform the determinant. Otherwise, before applying the expansion theorem, we transform the determinant using the following property: if to the elements of a row (column) we add the corresponding elements of another row (column) multiplied by an arbitrary factor, the value of the determinant does not change.

From the elements of row 3 we subtract the corresponding elements of row 2.

From the elements of column 4, we subtract the corresponding elements of column 3, multiplied by 2.

We expand the determinant along the elements of the third row:

2. The resulting third-order determinant can be calculated using the triangle rule or Sarrus' rule (see above). However, its elements are rather large numbers, so let us expand the determinant after first transforming it:

From the elements of the second row, subtract the corresponding elements of the first row, multiplied by 3.

From the elements of the first row we subtract the corresponding elements of the third row.

To the elements of row 1 we add the corresponding elements of row 2.

A determinant with a zero row is equal to 0.

So, determinants of order n > 3 are calculated by:

· reducing the determinant to triangular form using the properties of determinants;

· expanding the determinant along the elements of a row or column, thereby lowering its order.
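To make the expansion concrete, here is a minimal Python sketch (my own illustration, not from the original text) that computes a determinant by expanding along the first row; all function names are illustrative.

```python
# Illustrative sketch: determinant via cofactor expansion along the first row.

def minor_matrix(m, i, j):
    """Return m with row i and column j deleted (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant as the sum of a_0j * (-1)^j * M_0j over the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor_matrix(m, 0, j))
               for j in range(len(m)))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24
```

In practice one expands along the row or column with the most zeros, exactly as the worked example above suggests, since every zero element kills one recursive call.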

Matrix rank.

The rank of a matrix is an important numerical characteristic. The most typical problem requiring the rank of a matrix is checking the consistency of a system of linear algebraic equations.

Take a matrix A of order p × n. Let k be a natural number not exceeding the smaller of p and n, that is, k ≤ min(p, n).

A minor of order k of the matrix A is the determinant of a square matrix of order k × k composed of those elements of A that lie in k preselected rows and k preselected columns, the arrangement of the elements of A being preserved.

Consider the matrix:

Let's write down several first-order minors of this matrix. For example, if we select the third row and the second column of the matrix A, our choice corresponds to the first-order minor det(−4) = −4. In other words, to obtain this minor we deleted the first and second rows and the first, third and fourth columns from the matrix A, and formed a determinant from the remaining element.

Thus, the first-order minors of a matrix are the matrix elements themselves.

Let's show several second-order minors. Select two rows and two columns. For example, take the first and second rows, and the third and fourth columns. With this choice we have a second-order minor
.

Another second-order minor of the matrix A is the minor

Third-order minors of the matrix A can be found similarly. Since the matrix A has only three rows, we select them all. If we select the first three columns of these rows, we obtain a third-order minor:

Another third order minor is:

For the given matrix A there are no minors of order higher than the third, since k ≤ min(3, 4) = 3.

How many minors of order k does a matrix A of order p × n have? Quite a lot!

The number of minors of order k is the number of ways to choose k of the p rows times the number of ways to choose k of the n columns, i.e. C_p^k · C_n^k.
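Assuming the standard counting argument (k of the p rows and k of the n columns are chosen independently), the count is easy to check in Python with `math.comb`; the helper name is my own.

```python
from math import comb

def num_minors(p, n, k):
    """Number of k-th order minors of a p x n matrix: C(p,k) * C(n,k)."""
    return comb(p, k) * comb(n, k)

# a 3 x 4 matrix has C(3,2) * C(4,2) = 3 * 6 = 18 second-order minors
print(num_minors(3, 4, 2))  # 18
```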

The rank of a matrix is the highest order of a nonzero minor of the matrix.

The rank of the matrix A is denoted rank(A). From the definitions of the rank and of a minor we conclude that the rank of a zero matrix is zero, and the rank of a nonzero matrix is at least one.

So, the first method for finding the rank of a matrix is the method of enumerating minors. It is based directly on the definition of rank.

Suppose we need to find the rank of a matrix A of order p × n.

If at least one element of the matrix is nonzero, then the rank of the matrix is at least one (since there is a nonzero first-order minor).

Next we look at the second order minors. If all second-order minors are equal to zero, then the rank of the matrix is ​​equal to one. If there is at least one non-zero minor of the second order, then we proceed to enumerate the minors of the third order, and the rank of the matrix is ​​at least equal to two.

Similarly, if all third-order minors are zero, then the rank of the matrix is ​​two. If there is at least one third-order minor other than zero, then the rank of the matrix is ​​at least three, and we move on to enumerating fourth-order minors.

Note that the rank of the matrix cannot exceed min(p, n).
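The enumeration procedure described above can be sketched in Python as follows (an illustration of mine, not the author's code; `det` is a naive cofactor-expansion determinant):

```python
from itertools import combinations

def det(m):
    """Naive determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def rank_by_minors(a):
    """Rank as the highest order k for which a nonzero k-th order minor exists."""
    p, n = len(a), len(a[0])
    rank = 0
    for k in range(1, min(p, n) + 1):
        found = any(
            det([[a[i][j] for j in cols] for i in rows]) != 0
            for rows in combinations(range(p), k)
            for cols in combinations(range(n), k))
        if not found:
            break  # all k-th order minors vanish, so the rank is k - 1
        rank = k
    return rank

# the second row is twice the first, so the rank drops to 2
print(rank_by_minors([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2
```

This brute-force search examines C(p,k)·C(n,k) minors at each order, which is why the cheaper methods mentioned below are preferred for anything but small matrices.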

Example.

Find the rank of the matrix
.

Solution.

1. Since the matrix is nonzero, its rank is at least one.

2. One of the second-order minors
is different from zero; hence the rank of the matrix A is at least two.

3. Third order minors

All third-order minors are equal to zero. Therefore, the rank of the matrix is two.

rank(A) = 2.

There are other methods for finding the rank of a matrix that allow you to obtain the result with less computational work.

One such method is the method of bordering minors. It somewhat reduces the amount of computation, but the calculations remain rather cumbersome.

There is another way to find the rank of a matrix - using elementary transformations (Gaussian method).

The following matrix transformations are called elementary :

· rearrangement of rows (or columns) of the matrix;

· multiplying all elements of any row (column) of a matrix by an arbitrary number k, different from zero;

· adding to the elements of any row (column) the corresponding elements of another row (column) of the matrix, multiplied by an arbitrary number k.

A matrix B is called equivalent to a matrix A if B is obtained from A by a finite number of elementary transformations. Matrix equivalence is denoted by the symbol «~», that is, we write A ~ B.

Finding the rank of a matrix by elementary transformations is based on the following statement: if the matrix B is obtained from the matrix A by a finite number of elementary transformations, then rank(A) = rank(B), i.e. the ranks of equivalent matrices are equal.

The essence of the method of elementary transformations is to reduce the matrix whose rank we need to find to trapezoidal form (in a particular case, to upper triangular form) using elementary transformations.

The rank of a matrix of this form is very easy to find: it is equal to the number of rows containing at least one nonzero element. Since the rank does not change under elementary transformations, the resulting value is the rank of the original matrix.
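As a sketch of this method (my own illustration, using exact `Fraction` arithmetic to avoid rounding errors), reduction to echelon (trapezoidal) form with a count of the nonzero rows might look like:

```python
from fractions import Fraction

def rank_gauss(a):
    """Rank via elementary row transformations (reduction to echelon form)."""
    m = [[Fraction(x) for x in row] for row in a]
    rows, cols = len(m), len(m[0])
    rank = 0
    for col in range(cols):
        # find a pivot (nonzero element) in this column at or below row `rank`
        pivot = next((r for r in range(rank, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]   # swap rows (elementary op 1)
        for r in range(rank + 1, rows):         # eliminate below the pivot (op 3)
            factor = m[r][col] / m[rank][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

print(rank_gauss([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2
```

Only row swaps and row additions are used here, i.e. two of the three elementary transformations listed above; scaling a row is never needed just to count pivots.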

Example.

Using the method of elementary transformations, find the rank of the matrix

.

Solution.

1. Swap the first and second rows of the matrix A, since the element a11 = 0 while the element a21 is nonzero:

~

In the resulting matrix the element a11 is equal to one. Otherwise we would have had to multiply the elements of the first row by 1/a11. Let us make all elements of the first column except the first equal to zero. The second row already has a zero there; to the third row we add the first row multiplied by 2:


The element a22 of the resulting matrix is different from zero. Multiply the elements of the second row by 1/a22:

The second column of the resulting matrix has the desired form, since the element is already equal to zero.

Since the element a33 = 0 while a34 is nonzero, we swap the third and fourth columns and multiply the third row of the resulting matrix by the appropriate factor:

The original matrix has been reduced to trapezoidal form; its rank is equal to the number of rows containing at least one nonzero element. There are three such rows, so the rank of the original matrix is three: rank(A) = 3.


Inverse matrix.

Let us have a matrix A .

The matrix inverse to a matrix A is a matrix A⁻¹ such that A⁻¹A = AA⁻¹ = E.

An inverse matrix can only exist for a square matrix. Moreover, it itself is of the same dimension as the original matrix.

For a square matrix to have an inverse, it must be non-singular (i.e. Δ ≠ 0). This condition is also sufficient for A⁻¹ to exist. So every non-singular matrix has an inverse, and a unique one.

Algorithm for finding the inverse matrix using the example of a matrix A :

1. Find the determinant of the matrix. If Δ ≠ 0, then the matrix A⁻¹ exists.

2. Compose the matrix B of algebraic complements of the elements of the original matrix A. That is, the element in the i-th row and j-th column of B is the algebraic complement A_ij of the element a_ij of the original matrix.

3. Transpose the matrix B to obtain Bᵀ.

4. Find the inverse matrix by multiplying the matrix Bᵀ by the number 1/Δ.
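The four steps of this algorithm can be sketched in Python (my own illustration, not the author's code; exact `Fraction` arithmetic and a naive cofactor-expansion determinant):

```python
from fractions import Fraction

def det(m):
    """Naive determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def inverse(a):
    """Inverse via algebraic complements: A^(-1) = (1/det A) * B^T,
    where B[i][j] is the algebraic complement A_ij."""
    n = len(a)
    d = det(a)                      # step 1: the matrix must be non-singular
    if d == 0:
        raise ValueError("singular matrix has no inverse")
    cof = lambda i, j: (-1) ** (i + j) * det(
        [r[:j] + r[j + 1:] for k, r in enumerate(a) if k != i])
    # steps 2-4: element (j, i) of the result is cofactor(i, j) / det,
    # so the transpose is built in as we go
    return [[Fraction(cof(i, j), d) for i in range(n)] for j in range(n)]

A = [[2, 1], [7, 4]]                # det = 2*4 - 1*7 = 1
Ainv = inverse(A)                   # [[4, -1], [-7, 2]]
# check: multiplying back gives the identity matrix
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```

The final assertion is exactly the check described in the example below: the product of the matrix and its inverse must be the identity matrix.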

Example.

For a given matrix, find the inverse and check:

Solution

Let's use the previously described algorithm for finding the inverse matrix.

1. To determine the existence of an inverse matrix, it is necessary to calculate the determinant of this matrix. Let's use the triangle rule:

The matrix is non-singular and therefore invertible.

Let's find the algebraic complements of all matrix elements:



From the found algebraic complements we compose the matrix:

and transpose it:

Dividing each element of the resulting matrix by the determinant of the original matrix, we obtain the matrix inverse to the original one:

The check is carried out by multiplying the resulting matrix by the original one. If the inverse matrix has been found correctly, the result of the multiplication is the identity matrix.

To find the inverse of a given matrix you can also use the Gaussian method (of course, having first made sure that the matrix is invertible), which I leave for independent work.


Page creation date: 2017-10-12


Expansion of the determinant by elements of a row or column

Further properties are related to the concepts of minor and algebraic complement.

Definition. The minor of an element is the determinant composed of the elements remaining after deleting the i-th row and the j-th column at whose intersection this element stands. The minor of an element of an n-th order determinant has order (n − 1). We denote it by M_ij.

Example 1. Let , Then .

This minor is obtained from A by crossing out the second row and third column.

Definition. The algebraic complement of an element is the corresponding minor multiplied by (−1)^(i+j), i.e. A_ij = (−1)^(i+j) · M_ij, where i is the number of the row and j the number of the column at whose intersection the element stands.

VIII. (Expansion of the determinant along the elements of a row.) The determinant is equal to the sum of the products of the elements of some row and their corresponding algebraic complements:

Δ = a_i1·A_i1 + a_i2·A_i2 + … + a_in·A_in.

Example 2. Let the determinant be given; then

.

Example 3. Let us find the determinant of the matrix by expanding it along the elements of the first row.

Formally, this theorem and other properties of determinants are applicable only for determinants of matrices of no higher than third order, since we have not considered other determinants. The following definition will allow us to extend these properties to determinants of any order.

Definition. The determinant of an n-th order matrix A is the number obtained by sequential application of the expansion theorem and the other properties of determinants.

You can check that the result of the calculations does not depend on the order in which the above properties are applied and for which rows and columns. Using this definition, the determinant is uniquely found.

Although this definition contains no explicit formula for the determinant, it allows one to find it by reduction to determinants of matrices of lower order. Such definitions are called recursive.

Example 4. Calculate the determinant: .

Although the expansion theorem can be applied to any row or column of a given matrix, fewer computations result from expanding along the row or column that contains the most zeros.

Since the matrix has no zero elements, we obtain them using property 7). Multiply the first row successively by the numbers (−5), (−3) and (−2) and add it to the 2nd, 3rd and 4th rows; we obtain:

Let us expand the resulting determinant along the first column:

(we take (−4) out of the 1st row, (−2) out of the 2nd row and (−1) out of the 3rd row according to property 4)

(since the determinant contains two proportional columns).

§ 1.3. Some types of matrices and their determinants

Definition. A square matrix with zero elements below (or above) the main diagonal (a_ij = 0 for i > j, or a_ij = 0 for i < j) is called triangular.

In this topic we will consider the concepts of algebraic complement and minor. The presentation of the material is based on the terms explained in the topic "Matrices. Types of matrices. Basic terms". We will also need some formulas for calculating determinants. Since this topic contains a lot of terms related to minors and algebraic complements, I will add a brief summary to make it easier to navigate the material.

Minor $M_{ij}$ of the element $a_{ij}$

The minor $M_{ij}$ of the element $a_{ij}$ of a matrix $A_{n\times n}$ is the determinant of the matrix obtained from the matrix $A$ by deleting the i-th row and the j-th column (i.e., the row and the column at whose intersection the element $a_{ij}$ lies).

For example, consider a fourth-order square matrix: $A=\left(\begin{array}{cccc} 1 & 0 & -3 & 9\\ 2 & -7 & 11 & 5 \\ -9 & 4 & 25 & 84 \\ 3 & 12 & -5 & 58 \end{array} \right)$. Let's find the minor of the element $a_{32}$, i.e. let's find $M_{32}$. First we write down the minor $M_{32}$ and then calculate its value. To compose $M_{32}$, we delete the third row and the second column from the matrix $A$ (it is at the intersection of the third row and the second column that the element $a_{32}$ is located). We obtain a new matrix, whose determinant is the required minor $M_{32}$:

This minor is easy to calculate using formula No. 2 from the calculation topic:

$$ M_{32}=\left| \begin{array}{ccc} 1 & -3 & 9\\ 2 & 11 & 5 \\ 3 & -5 & 58 \end{array} \right|= 1\cdot 11\cdot 58+(-3)\cdot 5\cdot 3+2\cdot (-5)\cdot 9-9\cdot 11\cdot 3-(-3)\cdot 2\cdot 58-5\cdot (-5)\cdot 1=579. $$

So, the minor of the element $a_{32}$ is 579, i.e. $M_{32}=579$.

Often, instead of the phrase "minor of a matrix element", the literature says "minor of a determinant element". The essence is the same: to obtain the minor of the element $a_{ij}$, you cross out the i-th row and the j-th column from the original determinant. The remaining elements are written into a new determinant, which is the minor of the element $a_{ij}$. For example, let's find the minor of the element $a_{12}$ of the determinant $\left| \begin{array}{ccc} -1 & 3 & 2\\ 9 & 0 & -5 \\ 4 & -3 & 7 \end{array} \right|$. To write down the required minor $M_{12}$ we delete the first row and the second column from the given determinant:

To find the value of this minor, we use formula No. 1 from the topic of calculating determinants of the second and third orders:

$$ M_{12}=\left| \begin{array}{cc} 9 & -5\\ 4 & 7 \end{array} \right|=9\cdot 7-(-5)\cdot 4=83. $$

So, the minor of the element $a_{12}$ is 83, i.e. $M_{12}=83$.

Algebraic complement $A_{ij}$ of the element $a_{ij}$

Let a square matrix $A_{n\times n}$ be given (i.e., a square matrix of the n-th order).

The algebraic complement $A_{ij}$ of the element $a_{ij}$ of the matrix $A_{n\times n}$ is found by the following formula: $$ A_{ij}=(-1)^{i+j}\cdot M_{ij}, $$

where $M_{ij}$ is the minor of the element $a_{ij}$.

Let us find the algebraic complement of the element $a_{32}$ of the matrix $A=\left(\begin{array}{cccc} 1 & 0 & -3 & 9\\ 2 & -7 & 11 & 5 \\ -9 & 4 & 25 & 84\\ 3 & 12 & -5 & 58 \end{array} \right)$, i.e. let's find $A_{32}$. We previously found the minor $M_{32}=579$, so we use the result obtained:

Usually, when finding algebraic complements, the minor is not calculated separately first and the complement only afterwards; the notation for the minor is simply omitted. For example, let's find $A_{12}$ if $A=\left(\begin{array}{ccc} -5 & 10 & 2\\ 6 & 9 & -4 \\ 4 & -3 & 1 \end{array} \right)$. According to the formula, $A_{12}=(-1)^{1+2}\cdot M_{12}=-M_{12}$. However, to get $M_{12}$ it is enough to cross out the first row and the second column of the matrix $A$, so there is no need for a separate notation for the minor. Let us immediately write down the expression for the algebraic complement $A_{12}$:
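As a small sketch of mine (not from the text), the definitions of $M_{ij}$ and $A_{ij}$ can be checked on the 4×4 matrix from the example above, using 1-based indices as in the text:

```python
def det(m):
    """Naive determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def minor(a, i, j):
    """M_ij: determinant of a with row i and column j deleted (1-based)."""
    return det([row[:j - 1] + row[j:] for k, row in enumerate(a, 1) if k != i])

def alg_complement(a, i, j):
    """A_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor(a, i, j)

A = [[1, 0, -3, 9], [2, -7, 11, 5], [-9, 4, 25, 84], [3, 12, -5, 58]]
print(minor(A, 3, 2))           # 579, as computed for M_32 above
print(alg_complement(A, 3, 2))  # (-1)^(3+2) * 579 = -579
```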

Minor of the k-th order of the matrix $A_{m\times n}$

If in the previous two paragraphs we talked only about square matrices, then here we will also talk about rectangular matrices, in which the number of rows does not necessarily equal the number of columns. So, let the matrix $A_(m\times n)$ be given, i.e. a matrix containing m rows and n columns.

A minor of the k-th order of a matrix $A_{m\times n}$ is a determinant whose elements lie at the intersection of k rows and k columns of the matrix $A$ (it is assumed that $k\le m$ and $k\le n$).

For example, consider the matrix $A=\left(\begin{array}{cccc} -1 & 0 & -3 & 9\\ 2 & 7 & 14 & 6 \\ 15 & -27 & 18 & 31\\ 0 & 1 & 19 & 8\\ 0 & -12 & 20 & 14\\ 5 & 3 & -21 & 9\\ 23 & -10 & -5 & 58 \end{array} \right)$ and write down some third-order minor. To write a third-order minor we need to select any three rows and any three columns of this matrix. For example, take the rows numbered 2, 4, 6 and the columns numbered 1, 2, 4. The elements of the required minor lie at the intersection of these rows and columns. In the figure, the minor elements are shown in blue:

First order minors are found at the intersection of one row and one column, i.e. first order minors are equal to the elements of a given matrix.

The k-th order minor of the matrix $A_{m\times n}=(a_{ij})$ is called principal if the main diagonal of the minor contains only main-diagonal elements of the matrix $A$.

Let me remind you that the main-diagonal elements are those elements of the matrix whose two indices are equal: $a_{11}$, $a_{22}$, $a_{33}$ and so on. For example, for the matrix $A$ considered above, such elements are $a_{11}=-1$, $a_{22}=7$, $a_{33}=18$, $a_{44}=8$. They are highlighted in pink in the figure:

For example, if in the matrix $A$ we select the rows and columns numbered 1 and 3, at their intersection we get a second-order minor whose main diagonal contains only diagonal elements of the matrix $A$ (the elements $a_{11}=-1$ and $a_{33}=18$ of the matrix $A$). Therefore, we obtain a second-order principal minor:

Naturally, we could take other rows and columns, for example, with numbers 2 and 4, thereby obtaining a different principal minor of the second order.

Let some minor $M$ of the k-th order of the matrix $A_{m\times n}$ be nonzero, i.e. $M\neq 0$, while all minors whose order is higher than k are equal to zero. Then the minor $M$ is called a basis minor, and the rows and columns containing the elements of the basis minor are called basis rows and basis columns.

For example, consider the matrix $A=\left(\begin{array}{ccccc} -1 & 0 & 3 & 0 & 0 \\ 2 & 0 & 4 & 1 & 0\\ 1 & 0 & -2 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 \end{array} \right)$. Let us write the minor of this matrix whose elements lie at the intersection of the rows numbered 1, 2, 3 and the columns numbered 1, 3, 4. We get a third-order minor:

Let's find the value of this minor using formula No. 2 from the topic of calculating determinants of the second and third orders:

$$ M=\left| \begin{array}{ccc} -1 & 3 & 0\\ 2 & 4 & 1 \\ 1 & -2 & -1 \end{array} \right|=4+3+6-2=11. $$

So, $M=11\neq 0$. Now let's try to compose any minor whose order is higher than three. To make a fourth-order minor, we have to use the fourth row, but all the elements of this row are zero. Therefore, any fourth-order minor will have a zero row, which means that all fourth-order minors are equal to zero. We cannot create minors of the fifth and higher orders, since the matrix $A$ has only 4 rows.

We have found a third-order minor that is not equal to zero, while all minors of higher orders are equal to zero; therefore, the minor we considered is a basis minor. The rows of the matrix $A$ containing the elements of this minor (the first, second and third) are basis rows, and the first, third and fourth columns of the matrix $A$ are basis columns.

This example is, of course, trivial, since its purpose is simply to show the essence of a basis minor. In general there can be several basis minors, and usually the process of finding such a minor is much more complex and laborious.

Let's introduce another concept: the bordering minor.

Let some k-th order minor $M$ of the matrix $A_{m\times n}$ lie at the intersection of k rows and k columns. Add one more row and one more column to this set of rows and columns. The resulting minor of the (k+1)-th order is called a bordering minor for the minor $M$.

For example, consider the matrix $A=\left(\begin{array}{ccccc} -1 & 2 & 0 & -2 & -14\\ 3 & -17 & -3 & 19 & 29\\ 5 & -6 & 8 & -9 & 41\\ -5 & 11 & 19 & -20 & -98\\ 6 & 12 & 20 & 21 & 54\\ -7 & 10 & 14 & -36 & 79 \end{array} \right)$. Let's write down the second-order minor whose elements lie at the intersection of rows No. 2 and No. 5 and columns No. 2 and No. 4.

Let's add row No. 1 to the set of rows on which the elements of the minor $M$ lie, and column No. 5 to the set of columns. We obtain a new minor $M'$ (already of the third order), whose elements lie at the intersection of rows No. 1, No. 2, No. 5 and columns No. 2, No. 4, No. 5. In the figure, the elements of the minor $M$ are highlighted in pink, and the elements we add to it in green:

The minor $M'$ is a bordering minor for the minor $M$. Similarly, adding row No. 4 to the set of rows on which the elements of the minor $M$ lie, and column No. 3 to the set of columns, we obtain the minor $M''$ (a third-order minor):

The minor $M''$ is also a bordering minor for the minor $M$.

Minor of the k-th order of the matrix $A_{n\times n}$. Additional minor. Algebraic complement to a minor of a square matrix.

Let's return to square matrices again. Let us introduce the concept of an additional minor.

Let a certain minor $M$ of the k-th order of the matrix $A_{n\times n}$ be given. The determinant of the (n−k)-th order whose elements remain in the matrix $A$ after deleting the rows and columns containing the minor $M$ is called the minor complementary to the minor $M$.

For example, consider a fifth-order square matrix: $A=\left(\begin{array}{ccccc} -1 & 2 & 0 & -2 & -14\\ 3 & -17 & -3 & 19 & 29\\ 5 & -6 & 8 & -9 & 41\\ -5 & 11 & 16 & -20 & -98\\ -7 & 10 & 14 & -36 & 79 \end{array} \right)$. Let's select rows No. 1 and No. 3, as well as columns No. 2 and No. 5. At the intersection of these rows and columns lie the elements of a second-order minor $M$:

Now let's remove from the matrix $A$ the rows No. 1 and No. 3 and the columns No. 2 and No. 5 at whose intersection the elements of the minor $M$ lie (the removed rows and columns are shown in red in the figure below). The remaining elements form the minor $M'$:

The minor $M'$, whose order is $5-2=3$, is the minor complementary to the minor $M$.

The algebraic complement to a minor $M$ of a square matrix $A_{n\times n}$ is the expression $(-1)^{\alpha}\cdot M'$, where $\alpha$ is the sum of the numbers of the rows and columns of the matrix $A$ on which the elements of the minor $M$ lie, and $M'$ is the minor complementary to the minor $M$.

The phrase "algebraic complement to the minor $M$" is often replaced by "algebraic complement of the minor $M$"; the meaning is the same.

For example, consider the matrix $A$ for which we found the second-order minor $M=\left| \begin{array}{cc} 2 & -14 \\ -6 & 41 \end{array} \right|$ and its complementary third-order minor $M'=\left| \begin{array}{ccc} 3 & -3 & 19\\ -5 & 16 & -20 \\ -7 & 14 & -36 \end{array} \right|$. Let us denote the algebraic complement of the minor $M$ by $M^*$. Then, by definition:

$$ M^*=(-1)^\alpha\cdot M'. $$

The parameter $\alpha$ is equal to the sum of the numbers of the rows and columns on which the minor $M$ lies. This minor lies at the intersection of rows No. 1, No. 3 and columns No. 2, No. 5. Therefore, $\alpha=1+3+2+5=11$. So:

$$ M^*=(-1)^{11}\cdot M'=-\left| \begin{array}{ccc} 3 & -3 & 19\\ -5 & 16 & -20 \\ -7 & 14 & -36 \end{array} \right|. $$

In principle, using formula No. 2 from the topic of calculating determinants of the second and third orders, you can complete the calculations, obtaining the value $M^*$:

$$ M^*=-\left| \begin{array}{ccc} 3 & -3 & 19\\ -5 & 16 & -20 \\ -7 & 14 & -36 \end{array} \right|=-30. $$
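These calculations can be checked with a short Python sketch of mine (not from the text; 1-based row and column numbers, as in the example):

```python
def det(m):
    """Naive determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def alg_complement_of_minor(a, rows, cols):
    """(-1)^alpha * M', where alpha is the sum of the (1-based) row and
    column numbers of the minor M, and M' is the complementary minor."""
    m_prime = det([[x for j, x in enumerate(row, 1) if j not in cols]
                   for i, row in enumerate(a, 1) if i not in rows])
    alpha = sum(rows) + sum(cols)
    return (-1) ** alpha * m_prime

A = [[-1, 2, 0, -2, -14],
     [3, -17, -3, 19, 29],
     [5, -6, 8, -9, 41],
     [-5, 11, 16, -20, -98],
     [-7, 10, 14, -36, 79]]
print(alg_complement_of_minor(A, {1, 3}, {2, 5}))  # -30, matching M* above
```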

Definition. If in an n-th order determinant we choose k rows and k columns arbitrarily, the elements at the intersection of these rows and columns form a square matrix of order k. The determinant of such a square matrix is called a minor of the k-th order.

It is denoted by M_k. If k = 1, then a first-order minor is an element of the determinant.

The elements at the intersection of the remaining (n−k) rows and (n−k) columns form a square matrix of order (n−k). The determinant of such a matrix is called the minor additional to the minor M_k. It is denoted by M_{n−k}.

The algebraic complement of the minor M_k is the additional minor taken with the sign "+" or "−" according to whether the sum of the numbers of all the rows and columns in which the minor M_k is located is even or odd.

If k = 1, then the algebraic complement of the element a_ik is calculated by the formula

A_ik = (−1)^(i+k) · M_ik, where M_ik is a minor of order (n − 1).

Theorem. The product of a k-th order minor and its algebraic complement is equal to the sum of a certain number of terms of the determinant D_n.

Proof

1. Let's consider a special case. Let the minor M_k occupy the upper left corner of the determinant, that is, be located in the rows numbered 1, 2, …, k; then the minor M_{n−k} will occupy the rows k+1, k+2, …, n.

Let us calculate the algebraic complement of the minor M_k. By definition,

A_{n−k} = (−1)^s · M_{n−k}, where s = (1+2+…+k) + (1+2+…+k) = 2(1+2+…+k), so

(−1)^s = 1 and A_{n−k} = M_{n−k}. We get

M_k · A_{n−k} = M_k · M_{n−k}. (*)

We take an arbitrary term of the minor M k

, (1)

where s is the number of inversions in the substitution

and an arbitrary term of the minor M_{n−k}

where s * is the number of inversions in the substitution

(4)

Multiplying (1) and (3), we get

The product consists of n elements located in different rows and columns of the determinant D. Consequently, this product is a term of the determinant D. The sign of the product (5) is determined by the sum of the numbers of inversions in the substitutions (2) and (4), while the sign of the corresponding product in the determinant D is determined by the number of inversions s_k in the substitution

It is obvious that s_k = s + s*.

Thus, returning to equality (*), we find that the product M_k · A_{n−k} consists only of terms of the determinant D.

2. Let the minor M_k be located in the rows numbered i_1, i_2, …, i_k and the columns numbered j_1, j_2, …, j_k, where i_1 < i_2 < … < i_k and j_1 < j_2 < … < j_k.

Using the properties of determinants, we move the minor to the upper left corner by transpositions. We obtain a determinant D′ in which the minor M_k occupies the upper left corner and the additional minor M′_{n−k} the lower right corner; then, by what was proved in point 1, the product M_k · M′_{n−k} is the sum of a certain number of terms of the determinant D′, taken with their own signs. But D′ is obtained from D by (i_1 − 1) + (i_2 − 2) + … + (i_k − k) = (i_1 + i_2 + … + i_k) − (1+2+…+k) row transpositions and (j_1 − 1) + (j_2 − 2) + … + (j_k − k) = (j_1 + j_2 + … + j_k) − (1+2+…+k) column transpositions. In total,

(i_1 + i_2 + … + i_k) − (1+2+…+k) + (j_1 + j_2 + … + j_k) − (1+2+…+k) = (i_1 + i_2 + … + i_k) + (j_1 + j_2 + … + j_k) − 2(1+2+…+k) = s − 2(1+2+…+k) transpositions were made. Therefore, the terms of the determinants D and D′ differ by the sign (−1)^(s−2(1+2+…+k)) = (−1)^s; consequently, the product (−1)^s · M_k · M′_{n−k} consists of a certain number of terms of the determinant D, taken with the same signs as they have in this determinant.

Laplace's theorem. If in an n-th order determinant we choose arbitrarily k rows (or k columns), 1 ≤ k ≤ n−1, then the sum of the products of all k-th order minors contained in the selected rows and their algebraic complements is equal to the determinant D.

Proof

Let us choose arbitrary rows i_1, i_2, …, i_k and prove that

It was proved earlier that all terms on the left-hand side of the equality are contained as terms in the determinant D. Let us show that each term of the determinant D falls into one and only one of the summands. Indeed, every term t_s is a product of n elements, one from each row and each column; if in this product we mark the factors whose row indices are i_1, i_2, …, i_k and form their product, we see that this product belongs to some k-th order minor. Consequently, the remaining factors, taken from the remaining n−k rows and n−k columns, form a term belonging to the complementary minor and, taking the sign into account, to the algebraic complement; therefore, any t_s falls into one and only one of the products, which proves the theorem.

Corollary (theorem on the expansion of a determinant along a row). The sum of the products of the elements of some row of a determinant and the corresponding algebraic complements is equal to the determinant.

(Proof as an exercise.)

Theorem. The sum of the products of the elements of the i-th row of a determinant by the corresponding algebraic complements of the elements of the j-th row (i ≠ j) is equal to 0.